Stanford’s Evelyn Douek and Alex Stamos bring you the latest in online trust and safety news and developments, including two new Supreme Court cases, a Chinese influence operation targeting the US ahead of the mid-terms and PayPal’s accidental foray into content moderation discourse.
Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.
Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.
Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Alex Stamos:
This is really interesting because this is the first example we have of the PRC doing a Russian-style influence campaign circa 2016, 2017 of saying, "We are Americans and this is what we believe," and trying to trick people into thinking it. Not actually that advanced, not that interesting, they are certainly years behind what the Russians do now on influence campaigns in this space. But it is a definite change in doctrine for the PRC and I think it is going to be a big deal for us to pay attention to in 2022.
Evelyn Douek:
Welcome to Moderated Content, a podcast about content moderation, moderated by me, Evelyn Douek. Today Alex Stamos and I are starting something new. From now on we'll be in your feed about once a week with quick-hit updates from the last week in content moderation, platform governance, trust and safety and the like. In case you don't know him, where have you been? Alex is the Director of the Stanford Internet Observatory and had a few jobs I think before that, but he has asked me, before I begin, as the lawyer on the show, to make the disclaimer that anything we say next does not reflect the views of employers, past or present, especially not Stanford, any of its faculty, students, alumni, donors, and may not even reflect the views of the person that says them. Anything I missed, Alex?
Alex Stamos:
No, that's about it. Hold on to your butts. Here we go.
Evelyn Douek:
Yeah, that's right. All right, let's dive in. So obviously the place we have to start is in my neck of the woods with the big news from the Supreme Court, which granted cert in two cases that will determine the scope of platform liability for content that users post on their sites. As you know, we know, as everyone knows these days, I get asked about it by people at parties who don't even work in this field. It's about Section 230, which is the famous section of federal law that shields platforms, in most cases, from liability for the content that users post. There are two cases here and I want to make sure that we focus on both of them, because the first one has gotten most of the attention, which is Gonzalez v. Google, and that's about whether Section 230 shields platforms for content that they promote or recommend to users.
Evelyn Douek:
So the question there is, once a platform is using algorithms to promote content into other people's feeds, into users' feeds, do they then become not protected by Section 230 because they're taking such an active role in the spread of that content. But I also really want to highlight the second case, which is Twitter v. Taamneh, and that's about whether platforms can be found to have aided and abetted terrorism by having terrorist content on their services. This is a case brought by the family of a victim of a 2017 terrorist attack in Istanbul, which claims that Twitter, Google, and Facebook aided and abetted terrorism by allowing ISIS on their platforms in violation of the Anti-Terrorism Act.
Evelyn Douek:
So these are really interesting cases. I think that the second one is a real dark horse here. While people are focusing on Section 230, if you can find that a platform has aided and abetted a crime by simply having related content on their platform, think of the implications of that. That's not just in terrorism. We're going to see a bunch of laws coming from states about aiding and abetting abortions, for example. And so I really think that that one is one that we want to watch.
Alex Stamos:
So I think they're both interesting. On the first one, on the recommendation algorithms, we've been seeing some people suggest that we weaken 230 to try to distinguish more between platforms carrying other people's speech versus what is effectively their own speech. And I know that in First Amendment law you have all of this crazy line-drawing about when somebody is a publisher or not. And in a situation where you're making a recommendation, you're starting to say, "Well, that's actually your speech." And so I think something like that makes logical sense. The challenge here with Google is everything Google does that is useful to people is a recommendation.
Alex Stamos:
So if Google has strict liability for anything in their search index, I don't see how Google search exists. If they're responsible for absolutely everything that's in Google search, then even Google with their almost trillion dollars in market cap, I don't see how they could afford to be responsible for everything on the internet. And so while it makes a little bit of logical sense, I don't see how any implementation of this would allow for search engines to exist beyond some standard where it has to be extremely explicit that they're making a recommendation that's almost human based and intentional.
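To make that point concrete, here is a minimal sketch of why "search is a recommendation," using a toy index and an invented scoring function rather than anything Google actually runs: the engine's only job is to score and order third-party documents, which is structurally the same act as recommending them.

```python
# A minimal sketch, not Google's actual system: a toy "search engine" whose
# only job is to score and order third-party documents. The names and the
# scoring function are invented for illustration.

def score(query: str, doc: str) -> int:
    """Crude relevance score: count how often the query terms appear in the doc."""
    return sum(doc.lower().count(term) for term in query.lower().split())

def search(query: str, index: list[str], k: int = 3) -> list[str]:
    # Selecting and ordering the top-k documents is the whole product:
    # the engine is implicitly saying "these are the items we think you want."
    return sorted(index, key=lambda doc: score(query, doc), reverse=True)[:k]

if __name__ == "__main__":
    toy_index = [
        "How to bake sourdough bread at home",
        "Sourdough starter troubleshooting guide",
        "A history of bread in medieval Europe",
        "Unrelated page about car repair",
    ]
    print(search("sourdough bread", toy_index))
```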
Evelyn Douek:
I mean, you can imagine situations where you carve out recommendations that come based on a user request. So if a user types in "give me ISIS content" and the algorithm responds, it's not that the platform is trying to push that content out, it's just responding to a user request. That may be different. But then all the line-drawing exercises come in. If a user subscribes to a particular channel and then that gets "promoted" to them, does that fall into that category? And basically everything on a platform is somehow based on signals that the user is giving to the platform about what they want to see. And so it's really, really hard to draw any distinguishing line.
Alex Stamos:
Yeah. You can see tons of litigation around whether something's explicit or implicit. And then for the second case, by 2017 the industry coalition to fight ISIS was pretty aggressive. So aggressive that, I believe, you wrote an entire law review article about the possibility that being too aggressive was almost an antitrust violation. So that does seem interesting. I mean, to me the big picture context on these cases is that at the same time that you perhaps have a conservative Supreme Court that wants to punish tech companies because they feel like tech companies have been violating the rights of conservatives, both of these cases are about not enough moderation.
Alex Stamos:
So there seems to be a consistent inconsistency of: do you want companies to do more or less? Because the expectation that they're going to do exactly what any individual judge or any individual political actor wants, every single time, exactly how that single political actor wants, and that that is the only path that should have no liability, feels like the belief of a child. So I'm not sure how you end up with a Supreme Court who feels like, hey, the tech companies are out-of-control, liberal, woke content moderation scolds, but also they need to have more responsibility for taking down content, up to strict liability for anything that they publish. There's a huge inconsistency there and I'm not sure how they're going to square that.
Evelyn Douek:
Yeah, it's the "just get it right" doctrine of First Amendment law, very famous. For listeners who want more information about this, we will be doing a whole episode on it with Daphne Keller, our Moderated Content Supreme Court correspondent, later in the week. But just one last thing to say on this case is that it's extraordinarily broad. This is a weird fact set for the court to pick up because there's no proof that the attacker in the 2017 attack ever had accounts on these platforms. The plaintiffs aren't identifying any particular piece of content that should have been taken down. So it really is just the idea that the platforms weren't aggressive enough, and as Alex said, by that stage they were taking significant action. So the next one's for you, Alex. Meta took down the first targeted Chinese campaign to interfere in US politics ahead of the midterm elections. Tell us about that.
Alex Stamos:
So this is really interesting research that came out of Meta's coordinated influence operations group, the team that looks for government actions. What we've been seeing over the last several years is a really fast development of disinformation and influence operation capabilities out of the People's Republic of China. The traditional view of Chinese influence ops is that they used to be very focused on their near abroad, especially people who speak different dialects of Chinese. So one of our colleagues here at Stanford, Jennifer Pan, wrote a famous article that was part of her PhD at Harvard about the 50 Cent Army, which was the group of individuals in China who were empowered by the Chinese Communist Party to actually have internet access. So these are the only people who were allowed to get to Western platforms and who then carried the Chinese message around the world, possibly up to a million people designated as such.
Alex Stamos:
A real potent force, except they were very, very focused on Chinese politics, Taiwanese politics, Hong Kong politics. And as you can imagine, if you pick a million random Chinese citizens, their language skills are mostly going to be focused on a variety of dialects of Chinese. And so that 50 Cent Army was never really that effective in English or German or Spanish or any languages you might want to run influence ops against your potential adversaries in. And that has changed a lot. What really accelerated Chinese capabilities here was the COVID crisis as well as the Hong Kong crisis. In both situations the Chinese found themselves really being outflanked.
Alex Stamos:
But what we saw was the development of English language capabilities, or German capabilities and Spanish capabilities, about Chinese topics. So about COVID, about Hong Kong. This is really interesting because this is the first example we have of the PRC doing a Russian-style influence campaign circa 2016, 2017 of saying, "We are Americans and this is what we believe," and trying to trick people into thinking it. Not actually that advanced, not that interesting. They are certainly years behind what the Russians do now on influence campaigns in this space, but it is a definite change in doctrine for the PRC and I think it is going to be a big deal for us to pay attention to in 2022 and 2024.
Evelyn Douek:
So how much should we worry about it though? Because a lot of the reporting on these kinds of things, it's like, "Oh my god, the Chinese are getting into information operations. This is how it was around the Russians with the 2016 election". How effective is this stuff?
Alex Stamos:
That is the huge open quantitative social science question: how much effect do these kinds of influence operations have? And the studies generally come back saying not that much, to be honest. Just based upon general size, if you look at the size of the Russian campaign in 2016 versus the advertising that was being paid for by the parties, the advertising being paid for by the candidates, just the huge amount of conversation. It is a tiny, tiny, tiny fraction of the conversation about the 2016 election. Tiny, way less than one tenth of 1% of the conversation was being driven by Russian agents. And so it is very difficult to come up with a scenario where Russian influence operations have been really effective in really big things like an election.
Alex Stamos:
I think where I get more concerned about influence ops is on the much smaller scale conversations where you might have some impact. And so what I worry more about is individual House races. Let's say you have a House candidate who happens to be very anti-PRC and they're going up against somebody who is neutral or has some Chinese interest from a business perspective. That's a place where perhaps you can actually have some impact on the local conversation. But on things like a presidential race, where it's 80% of television and 50% of people's internet feed, the ability for them to affect that conversation I think is pretty minimal.
Evelyn Douek:
And just to speak up for foreigners here, that real local targeted impact doesn't have to come from foreigners, it doesn't have to be Chinese. In local elections it could be local business interests running information operations, if we want to call them that, although we don't necessarily call them bad when they're just in local campaigns. That's something that keeps me up at night as well.
Alex Stamos:
And I think you should be careful, Evelyn, not to be a foreign influence.
Evelyn Douek:
That's right.
Alex Stamos:
[inaudible 00:11:31] this podcast.
Evelyn Douek:
I'm very careful about what I tweet. I do actually have to hand over my social media handles every time I apply for a visa. And so I just absolutely love this country's absolutist free speech tradition. It feels really good.
Evelyn Douek:
So a story that didn't get covered very much this week, or doesn't attract a lot of attention, it's an industry that doesn't attract a lot of attention: Spotify announced that it's acquiring the content moderation company Kinzen, bringing the hate speech moderation tools that Kinzen has in house. It had contracted with them before, but this is an acquisition now, and this is something that we see happening a lot more across the industry, where you see these startups creating specific content moderation tools and then larger platforms buying them up.
Evelyn Douek:
Of course, Spotify has had its content moderation practices in the spotlight over the last year or so with the Joe Rogan controversy. And I think it just really highlights how this is becoming a massive industry. Content moderation may have just been an afterthought when this all started up, but it is now big business. There's lots of money going into this. There's lots of venture capitalists getting interested in this and that's only going to be supercharged as there's regulation all around the world that puts certain responsibilities on platforms to do more content moderation and they don't necessarily have the expertise in house to do all of that. They're going to be looking for companies outside who can provide that service. Basically compliance tools.
Alex Stamos:
Yeah, I do think this is another piece of evidence of a thesis I've had for a while, which is that the trust and safety space is about 20 years behind information security. Twenty years ago, we didn't have classes, we didn't have textbooks, you didn't have a wide range of companies providing information security services, and now you've got a whole wide range. But if today you are a new platform, you have users sharing content, and you end up with a child abuse problem, you end up with a hate speech problem, you end up with terrorists who want to use your platform, you end up with a terrorist watch list/Treasury sanctions problem, who do you turn to? And the answer is there's actually a pretty small number of companies. So we've had a couple of these acquisitions, a famous one from Discord. Discord, also in the height of a lot of their content moderation problems, bought a company that was focused on trust and safety algorithms.
Alex Stamos:
And so I think you'll start to see this, and here in Silicon Valley, this is a key part of the virtuous cycle of the creation of new companies: the acqui-hire, or acquisition for large amounts of money. I don't think they disclosed the size of this one, but that is a key signaling effect to venture capitalists that there is a soft landing for those of their investments that do not become unicorns or don't IPO. And so as we have more of these, I think it will trigger a lot more venture capital investment in the trust and safety space.
Evelyn Douek:
I was at the industry conference TrustCon last year, the first industry conference in this space, and the number of times I got pitched software that is absolutely solving this problem was amazing, and you could just see people's faces fall when I said I was an academic and absolutely not going to be buying any of these content moderation services. But I do think it's clearly booming. This is something, though, that I worry a little bit about for the reasons in that article that you mentioned earlier, Alex, thank you, I'll send you a check afterwards, "Content Cartels," where we might see a standardization of content moderation across platforms, because who is going to have the best content moderation tools?
Evelyn Douek:
It's going to be the big platforms that have the resources and the data to train these tools on. And Facebook might say to a young platform, "Hey look, here are some content moderation tools I prepared earlier. Do you want to plug into those?" And so instead of having a diversity of different options, a small platform might just look at it as a compliance exercise: I want to comply with the EU Digital Services Act, we know that Facebook is basically complying, so I can just pick up this software and plug it in. We might see more uniformity across the industry.
Alex Stamos:
Yeah, no, totally. The Facebook theory is one I've had for a while, and I still think it's a massive untapped revenue source for Facebook, and Facebook is desperate for revenue right now. The stock is way down. Zuck has spent all this money on these stupid headsets to convince people to spend all their time in the metaverse. Write it down now, October 2022, I said the metaverse was silly. Everybody can make fun of me in the future when we're all plugged in all day in 3D worlds, but they're looking for revenue. They're getting their butt kicked by TikTok. And I think there is a real opportunity to become a Facebook Web Services, to go the Amazon route and externalize stuff that was used internally.
Alex Stamos:
But like you said, say you're a small company and you go to the new Facebook Web Services to get your hate speech moderation. Even if you're making your own decisions, the classifier Facebook uses is going to be based upon Facebook's standard, all of the labeling they've done of what is and is not hate speech, for example. And so you're right, we're definitely heading towards a world where, if the long tail of companies are required to have the same level of content moderation as a Facebook or Google or Microsoft or an Amazon, then they're going to have to probably outsource to those companies.
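As a purely hypothetical sketch of that dynamic (none of the class names, labels, or thresholds below correspond to a real Facebook or vendor API), note that the customer platform only really controls a confidence threshold; the label taxonomy and the standards baked into the training data come from the vendor, which is how one company's policy quietly becomes the industry's.

```python
# A hypothetical sketch of "moderation as a service": the vendor ships a
# classifier trained on its own labeled data and policy definitions, and the
# customer platform mostly just picks a threshold. All names, labels, and
# numbers here are invented for illustration, not a real API.

from dataclasses import dataclass

@dataclass
class VendorVerdict:
    label: str         # taxonomy defined by the vendor, not by the customer
    confidence: float  # produced by the vendor's model and training data

class VendorHateSpeechClassifier:
    """Stand-in for a vendor-hosted model; a real one would be a network call."""

    def classify(self, text: str) -> VendorVerdict:
        # Toy heuristic standing in for a trained model.
        hit = any(term in text.lower() for term in ("slur1", "slur2"))
        return VendorVerdict("hate_speech" if hit else "ok", 0.97 if hit else 0.12)

class SmallPlatform:
    def __init__(self, classifier: VendorHateSpeechClassifier, threshold: float = 0.9):
        self.classifier = classifier
        self.threshold = threshold  # the only knob the customer really controls

    def moderate(self, post: str) -> str:
        verdict = self.classifier.classify(post)
        # The platform "makes its own decision," but only within the vendor's
        # label taxonomy and the standards baked into its training data.
        if verdict.label == "hate_speech" and verdict.confidence >= self.threshold:
            return "remove"
        return "keep"

if __name__ == "__main__":
    platform = SmallPlatform(VendorHateSpeechClassifier())
    print(platform.moderate("a totally benign post about gardening"))
```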
Evelyn Douek:
Just want to put it on the record, I am bullish on the metaverse as someone that works out in it quite frequently. So there you go. Not normally.
Alex Stamos:
You're in way better shape than I am. There's your endorsement. If I liked the metaverse, I'd be skinny.
Evelyn Douek:
That is the advertising campaign coming out in 2023. All the hot people are in the metaverse. You can't tell, but trust us.
Alex Stamos:
I mean, I look great in there, right?
Evelyn Douek:
Yeah, that's right.
Alex Stamos:
Maybe you and I could be the new Mac and PC ad.
Evelyn Douek:
Oh, that's excellent. Okay, so this is a fun one. Speaking of everybody getting into content moderation: PayPal got into a bit of trouble over the last week after some rules came out saying that it was going to fine users $2,500 if they used its services to promote misinformation. A lot of conservative outlets picked this up. And quite rightly, I think it would be weird if a payment service provider just started policing content that users were pushing or using money for. In the end, the company just said, "Oops, that user agreement was sent out in error," and ironically was itself misinformation. The spokesperson said that the announcement that came out included incorrect information related to company policy. So I hope someone internally is getting fined.
Alex Stamos:
Yeah, this one's bizarre because, hey, I can't think of another situation, I don't think there is one, where PayPal fines you for something. PayPal kicks people off PayPal all the time. You do some fraud, you steal some money, you run a GoFundMe and you say you're dying of cancer but you're really not, they kick people off for that stuff all the time. They don't fine them and then say, you keep on using our service. So this one's just bizarre to me, and it's definitely not the direction you want to go. You should have rules that say we're not going to support certain kinds of actors, we're not going to allow people on the terrorist watch list, and we're not going to have people who are raising money to hurt others. And then if you use us for fraud, sure, we're going to kick you off and then maybe you freeze the funds because it's going to be part of a legal issue. But to then just take a little bit of your money, like, "Hey, do better next time on your disinformation," seems like a completely ridiculous place for PayPal to be in our society.
Evelyn Douek:
Nice pot of money you've got sitting there on our services, would hate for you to lose a little bit of it for posting misinformation. I think this really raises, and this is an area that I think doesn't get enough attention as well, content moderation in the stack and what rules we want payment service providers to have. And in my mind, definitely not the same rules that we would have at the application layer, but as you say, they do kick them off all the time.
Evelyn Douek:
We see this especially with terrorist content and terrorist groups, and that can have a lot of disparate impacts as well. We saw in Israel-Palestine conflicts that a lot of people in Palestine were getting kicked off by services like Venmo and PayPal because of Hamas being on the lists. And there's not a lot of transparency, there's not a lot of accountability further down in the stack than the application layer. And so these are really interesting issues, but just taking money out of people's accounts seems a pretty radical way of dealing with that. But at least we have attention on it and we can have debates about it now.
Alex Stamos:
It's great. I appreciate PayPal giving us the ability to argue about this.
Evelyn Douek:
And good podcast content, which is really all we want.
Alex Stamos:
Today's podcast brought to you by PayPal.
Evelyn Douek:
Yeah, that's right. Unless there's misinformation in it, in which case, never used the service. Don't know anything about it. Any other stories that you want to highlight this week, Alex?
Alex Stamos:
Yeah, I think we've had a couple of other interesting things. Here in California, we have a new law around what's been called cyber flashing. I'm actually happy to see these laws because, you and I, we've spent all this time today already talking about misinformation and influence operations. But when you look at the use of online platforms to cause harm, there's this whole long tail of really mundane things that happen every day to normal people that don't have to do with geopolitics, that don't have to do with the world ending, but really are harmful. And one of those things is women opening up their phones and every single interface they have being full of unsolicited pictures of male genitalia. That's how I will say it because this is a Stanford podcast; I will not use the normal colloquialism for this. And then the other is NCII, aka revenge porn, the use of images, generally from intimate partners, that then get reused for harmful purposes.
Alex Stamos:
That's the stuff. We spend all this time talking about misinformation, which, as we just discussed, has really nebulous impacts, but we don't think about how every single day hundreds of women are going to have their photos shared in a way that is going to be very harmful. And in some cases that's not even illegal. And so I am glad to see that instead of another misinformation law or a law like Texas's, California is actually paying attention to something that pretty clearly should be a crime, flashing people, it just happens to be virtual, and for which there's very little First Amendment argument. Let's take care of that instead of just spending more time talking about vaccine disinformation, which is one of the hardest things to come up with a good societal response to.
Evelyn Douek:
And it's a good instance of saying maybe just battering platforms around the head isn't the right way to solve this problem. Maybe legislatures and law enforcement can step up and deal with these things that, as you say, pretty obviously should be crimes. Maybe voluntary action by platforms isn't going to be the most effective way to deal with this abuse. So I'm also really happy to see legislatures stepping up and not just having congressional hearings to yell at platform executives again.
Alex Stamos:
Although those are a lot of fun to watch.
Evelyn Douek:
Right. Well, the first two or three might have been, now we are in the double digits, and I've got other things to do with my life.
Alex Stamos:
I'm not sure what percentage of my limited lifespan I've now spent during that pause after a Senator asks a question and Mark Zuckerberg waits while sipping his drink and then says, "Thank you for the question, Senator." I wish I had that time back.
Evelyn Douek:
Yeah. Although as far as I'm concerned, they can continue until we get Susan Wojcicki in front of Congress. I do not know how she has managed to duck the spotlight this long. So "Wojcicki to the Hill 2023" is the new campaign.
Alex Stamos:
This is the year, it is. You're going to make it happen.
Evelyn Douek:
Exactly. This podcast is the way to launch a campaign.
Alex Stamos:
Now you're a real professor, so I feel like if you could build an entire tenure committee packet based upon getting Wojcicki to the Hill, that means it's a real thing.
Evelyn Douek:
Yeah. I'll put all my research assistants on graphic design for the billboard posters that we're going to put up. Any other stories you want to mention?
Alex Stamos:
No, but I'm very glad that we're doing this podcast. It's great to see you. It's great to see you around campus here at Stanford, and I'm looking forward to next week.
Evelyn Douek:
Excellent. All right. Tune in next week for more. I would like some employee at Apple to get an alert on their screen for such a large number of really good reviews of this podcast coming in in a very short period of time. So if any of you are listening and enjoyed this, please go ahead, rate, review, whatever. If you didn't enjoy it, go about your day and have a good afternoon.
Alex Stamos:
I think that was just an ask for a coordinated inauthentic behavior.
Evelyn Douek:
It absolutely was.
Alex Stamos:
On the Apple Podcasts platform. Okay, excellent. Help us out, folks. Get there and vote early and vote often for this podcast.
Evelyn Douek:
This episode of Moderated Content wouldn't be possible without the research and editorial assistance of John Perrino, amazing policy analyst at the Stanford Internet Observatory. It is produced by the marvelous Brian Pelletier. Special thanks also to Alyssa Ashdown, Justin Fu and Rob Homan. Show notes are available at our Stanford website.