Alex and Evelyn get Josh Tucker, a professor at New York University, and Solomon Messing, a Visiting Researcher at Georgetown University, to talk them through the results of two major studies about the effects of online speech; they also discuss Twitter cutting off API access; public universities cutting off TikTok access; Apple promising more transparency about app store takedowns; and an update from the courtrooms.
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:
Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.
Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.
Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Alex Stamos:
I was dealing with the lake in the front yard, so my ark is only half constructed. We'll see if we get there.
Evelyn Douek:
I hope I can finagle myself a ticket to the ark when the flood comes.
Alex Stamos:
Well, I'm going to have two of every kind of tech policy commenter, right? So I need two First Amendment lawyers. I think you can help me with that.
Evelyn Douek:
Fantastic. I knew it. I knew that being a First Amendment lawyer would save my life one day.
Welcome to Moderated Content's Weekly news update from the world of trust and safety with myself, Evelyn Douek and Alex Stamos. We have a surprising amount of content to get through today, so we are going to jump right in.
We talk a lot about wanting policy responses to be empirically informed when we are talking about tech policy, so we thought it was a good opportunity, because there were two big studies that came out in the last week about the impacts of what people actually see online, which are worth paying attention to. Hopefully, policymakers will pay attention to them as they think about regulatory design, and more generally as policymakers, academics, and tech platforms think about the best way to deal with online speech.
We've got two of the authors with cameos today to talk about their findings. The first is Josh Tucker, who, with five co-authors, wrote a paper in the journal Nature Communications titled Exposure to the Russian Internet Research Agency Foreign Influence Campaign on Twitter in the 2016 US Election and Its Relationship to Attitudes and Voting Behavior. I love social science titles; they are much more self-explanatory than law-journal titles, which always try to be catchy at the expense of clarity.
Here's Josh Tucker telling us about his study.
Josh Tucker:
Thanks so much for having me on the podcast this week to talk about our new study out in Nature Communications, about exposure to tweets from the Russian foreign influence attempt in the 2016 US election.
Our study had four major findings. Most importantly, this is the first study of which we're aware that actually measured not what the Russian troll accounts were attempting to do, what the content of the tweets was, or what networks they were embedding in, but who was actually exposed to the tweets. We were able to do this because we had a survey in the field during the 2016 elections, where we talked to people in April of 2016, October of 2016, and then again right after the election. In that intervening time, we were able to collect all of the tweets sent to all of the participants in the survey from all of the people they followed, so we could cross-check this list of 1.2 billion tweets to which our survey participants could have been exposed against the list of tweets produced by the Russian trolls that was released by Twitter. And here's what we found:
First, we found that exposure to these tweets from the Russian trolls was incredibly highly concentrated. 1% of our sample accounted for almost 70% of the potential exposure to these tweets, and 10% of our sample accounted for 98% of the exposure. What that means is that 90% of our sample probably didn't see any tweets, or at most just a handful, from these Russian trolls. So heavily, heavily concentrated.
Second, when we looked at the relative prevalence of these tweets from the Russian trolls, we found this was not the only source of information people had about the election, by a long shot. Indeed, we found that people got four times as many tweets from politicians and political parties in the same period as they did from these Russian trolls, and close to 25 times as many tweets from media sources, and that's just on Twitter. So people who were exposed to these tweets from the trolls also had lots of other opportunities to get information about the election.
Our third finding was that when we looked at who the people exposed to more of these tweets were, exposure was heavily concentrated among highly partisan Republicans. Indeed, we found that strong Republicans in our sample, on average, were exposed to nine times as many tweets from these trolls as Democrats or independents.
When you take all of these things together, that the tweet exposure was highly concentrated, that there were lots of other potential sources of information about the elections on Twitter, to say nothing of what was off Twitter, and that the people who were exposed to these tweets were more likely to be the types of people who were going to definitely vote for Trump anyway, our final result becomes perhaps not as surprising as you might have thought ahead of time. We looked at whether there were changes in people's attitudes from April to October, or changes in their vote preference from April to election time, and whether that was related to whether they had seen any tweets from these Russian trolls, or to the number of such tweets they had seen.
We found there was no correlation between exposure and becoming more polarized on political issues, or becoming more or less likely to want to vote for one of the particular candidates, or even more or less likely to, say, not vote for Hillary Clinton by voting for an alternative candidate or by not participating in the election at all. No real change. We couldn't find any relationship between seeing a lot of these tweets and being more likely to change your position on issue areas or on vote preference.
However, it's critical to put these results in context. This crucially does not mean that Russia never interfered in the 2016 election. In fact, our entire study is predicated on the fact that Russia was interfering in the 2016 election and that we could take at face value the tweets Twitter gave us as evidence that Russia interfered in the election.
Of course, we also know that there were many other ways Russia tried to interfere in the election. Russia tried to interfere on Facebook, including by purchasing ads; and of course there was the hack and leak, where Russia participated in hacking the DNC and then leaking that information at opportune times over the course of the campaign, as has been well documented by Kathleen Hall Jamieson in her research and by others. So we want to be super clear: what we have done is provide some evidence about one piece of the Russian attempt to interfere in the 2016 US elections.
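To make the matching concrete, here is a minimal sketch of the kind of cross-referencing Tucker describes: joining the tweets respondents could have seen against the troll account list released by Twitter, then measuring how concentrated exposure is. All file names, field names, and the data layout are hypothetical; the study's actual pipeline may differ.

```python
import json
from collections import Counter

# Hypothetical inputs: a set of IRA troll account IDs, and one JSONL file
# of (respondent_id, author_id) records for every tweet that appeared in
# a survey respondent's feed during the study window.
with open("ira_troll_account_ids.txt") as f:
    troll_ids = {line.strip() for line in f}

exposure = Counter()  # respondent_id -> number of troll tweets in feed
with open("respondent_feed_tweets.jsonl") as f:
    for line in f:
        tweet = json.loads(line)
        if tweet["author_id"] in troll_ids:
            exposure[tweet["respondent_id"]] += 1

# Concentration of exposure among exposed respondents. (The paper's 1%/70%
# and 10%/98% figures are computed over the whole sample, so the exact
# denominator here is a simplification.)
counts = sorted(exposure.values(), reverse=True)
total = sum(counts)
for pct in (0.01, 0.10):
    k = max(1, int(len(counts) * pct))
    share = sum(counts[:k]) / total
    print(f"Top {pct:.0%} of exposed respondents saw {share:.0%} of exposures")
```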
Evelyn Douek:
Great, and here's Sol Messing, who, with nine co-authors, published a study in Nature Human Behaviour titled A 2-Million-Person, Campaign-Wide Field Experiment Shows How Digital Advertising Affects Voter Turnout. Again, an extremely helpful title. Here's Sol explaining what they found.
Sol Messing:
There are two big takeaways from this paper. First, social media ads have very small effects on voting, far smaller than many pundits and media commentators often assume. But even tiny effects can have pretty big implications for US presidential elections, thanks to the Electoral College: Biden won Wisconsin by just 0.6 percentage points. Second, this paper shows that people who voted early were more responsive to ads. This may be because in October the media environment is saturated with politics and advertising, and a lot of swing voters have already made up their minds. But if you reach someone in July, maybe they vote before they forget about the ad. We often see ad effects decay over time.
Now, because we're talking about really small effects, you need a big real-world experiment to start to understand how social media affects elections. Survey experiments, which are A/B tests embedded in a survey, can be unrealistic. Correlational studies are tricky because politicians buy ads in competitive districts, so you can't just look at vote returns and spending. You really need the kind of big field experiment we conducted.
Now, the best past field experiments have shown that the effect is basically zero, but there's a lot of pushback from people who look at those studies and complain that they're too focused on a single ad. They're not looking at an entire ad campaign, much less the sum total of advertising people get. Mark Mellman put it really nicely: one group ate a potato chip, the other had none, and then they were tested. Would you expect to find an effect on health? Obviously not.
So what we do, for the first time, is create a control group for an entire eight-month, $8.9 million advertising campaign. We worked with ACRONYM, which is a pro-Democrat nonprofit and was a pretty big spender in 2020. The ads were served to people in five battleground states, so it's pretty realistic: my home state of Arizona, Wisconsin, Michigan, North Carolina, and Pennsylvania, all very competitive states. And what do we show? We show this campaign increased voting among Biden leaners by about 0.4 percentage points and decreased voting among Trump leaners by 0.3 percentage points, for a difference of 0.7 percentage points. So we're seeing differential turnout of about the same margin that Biden won Wisconsin by, obviously in favor of Biden. And as I said earlier, we show that this effect is even stronger among early voters.
This is some of the best evidence we have to date about the effects of ads on differential turnout.
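The arithmetic behind that headline number is simple enough to sketch. Here is a toy version, with made-up turnout rates standing in for the paper's estimates; the real study derives these from a randomized control group that was withheld from the ads.

```python
# A toy illustration of the differential-turnout arithmetic Messing
# describes. The turnout rates below are hypothetical placeholders.
def ad_effect(turnout_treated: float, turnout_control: float) -> float:
    """Turnout effect of the ads, in percentage points."""
    return (turnout_treated - turnout_control) * 100

biden_leaners = ad_effect(0.684, 0.680)   # hypothetical: about +0.4 points
trump_leaners = ad_effect(0.677, 0.680)   # hypothetical: about -0.3 points

# The quantity that matters in a close state is the differential effect:
print(f"Differential turnout effect: {biden_leaners - trump_leaners:.1f} points")
# -> 0.7 points, comparable to Biden's 0.6-point margin in Wisconsin
```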
Evelyn Douek:
So I think there's a bunch to dig into here, and ironically, I've seen a lot of different takeaways floating around online about what these studies say. So just to start, Alex, what's your headline takeaway from these together?
Alex Stamos:
Like you said, these are a real Rorschach test. Just like anything else, it turns out you can interpret papers in Nature through a culture war lens.
My takeaway is that for years, lots of people have been overestimating the impact of social media platforms, probably overall and in particular in these two areas. Now, you heard Josh be very careful about the framing: he only looked at a very specific part of the Russian Internet Research Agency's activity in 2016. So he did not look at GRU activity, which was the hack and leak, and he didn't look at IRA activity on Facebook and elsewhere.
I think the other IRA activity was really the same content. It was of different volume, but I don't think the effect would be significantly different for the rest of the IRA's work. He just happened to have Twitter data from doing a survey at that time, so they got very lucky that they happened to be looking at what people were seeing on Twitter in real time.
Like Josh said, the IRA was probably not hugely effective. I've been saying this for years, and usually I get called a tech bro apologist and all this kind of stuff, so it's nice to have a little bit of empirical data in Nature, which is one of the top journals. It's been a big month for Nature and quantitative social science. It's just nice to have empirical evidence backing up what was honestly pretty obvious to me, which is that the IRA activity was a drop in the bucket of the overall discussion of the 2016 election. In the year 2016, the most discussed topic on the entire planet was Hillary Clinton versus Donald Trump. Against that backdrop, any kind of concerted campaign is still going to be pretty minimal relative to the amount of content being created by lots and lots of partisan actors, and also nonpartisan actors.
The other thing that was always obvious, and that I've been saying since 2017, but that people haven't really paid attention to, is that the IRA campaign was not mostly about the election. The vast majority of the content from the Internet Research Agency did not mention a candidate, did not mention the election. It was about stirring up people on other topics. And again, on those other topics, there's so much more stuff out there. So I'm glad Tucker did this. It does demonstrate that we should not over-pivot on the impact of those kinds of coordinated campaigns.
Evelyn Douek:
I mean, it's pretty clear that this sort of hypodermic needle theory of how social media works is false. This study came out, it was widely, widely reported on, and a lot of social media researchers are like, "Well, duh, we knew this all along." But I think it's really good to have that evidence, and hopefully policymakers pay attention to it. It's not as if this content gets directed straight into your brain and suddenly you are transformed from a Hillary voter into a Trump voter.
Alex Stamos:
And it's good because this paper has finally broken through. There have been some other studies that have shown the same stuff, but this one seems to be the one that's finally broken through to what you might want to call the New York Times consensus, the 500 people in DC and New York who decide what the conventional wisdom is on these things. They finally understand what the rest of us have been saying for years.
That being said, talking about the lens, it does not mean that Russia did nothing, and I'm glad Tucker talked about that, because his research is being weaponized by people who want to deny Russian interference entirely. Like he said, the whole thing is predicated on the fact that the Russians actually did something. And it doesn't mean the Russians weren't effective in the big picture. The GRU activity, I think, was much more effective, because hacking all of those email accounts, hacking the DNC, hacking John Podesta, and then getting media outlets to run the material changed the entire tenor of the conversation. Now, that becomes a much more complicated discussion, because you've got James Comey making decisions, and you've got all of these different political actors making different decisions based on what's going on, and it's very hard to figure out what they would've done without this. But the whole but-her-emails thing was clearly a bigger deal in the last couple months of the election than the relatively tiny amount of content from the Internet Research Agency.
Evelyn Douek:
I think it was really great that Josh was really clear about this, both in the memo and in how he's been talking about it more generally. Because I was going to ask you, Alex: you worked at Facebook sniffing this stuff out, you've since set up the Stanford Internet Observatory, you spend a ton of time searching out information operations, you write a ton of reports. Does this study make you want to hang up your hat and go, "Well, we didn't need to worry about this at all"?
Alex Stamos:
No, for a few reasons. One, I think it's important just to understand what countries are trying to do. Every country in the world is trying to manipulate the online information environment to its benefit, so it's important for us to document that, so that people can understand what's going on and take steps, even if the impact is sometimes not that great. Second, again, the election was the most discussed topic in 2016, and it's an issue that really goes to people's identity, beliefs that are incredibly deeply held. I would still love to see research, though it's a lot harder now because you're talking about much smaller data sets, on the attempts to manipulate the political beliefs of Americans on topics for which they don't have deeply held beliefs.
The number one topic of the Internet Research Agency is actually domestic Russian politics. Most of the work the IRA does is to shore up Putin domestically. But probably its second-largest topic, and a huge target of the GRU as well, was the situation in Syria. A big focus was trying to get progressives in the West, in the United States and in Europe, to believe that any kind of intervention by the West was imperialistic and that Assad was basically the good guy: the Russians were the good guys, and the Americans and the people fighting against Assad were the bad guys.
And internal Syrian politics is an area where, unless you're from the Middle East, not a lot of people have strongly held culture-war feelings. I think it would be interesting to see how effective that was, because that campaign was not just about this kind of trolling content, but about creating fake journalists, creating fake people on the ground, creating entire lefty progressive outlets to push the idea that Assad was somebody progressives should champion. And you do see that kind of stuff from people on the far left.
For other topics, like Saudi Arabia and Yemen, not a lot of Americans had many opinions at all. So if you see a piece of propaganda, it might be the only thing you see on that topic that month, versus 20 things about Clinton-Trump in a day. So I still think it's a big deal.
And then the third reason is that the coordinated campaigns to manipulate the platforms are, I think, a lot less about just generally blasting stuff out and more about hurting individuals. What we've seen is a move toward trolling campaigns that are all about the individual destruction of people they disagree with. Especially if you're a woman, or if you're a Chinese dissident who leaves China, lives in Australia, and writes about the Chinese Communist Party, you'll have thousands of trolls who spread lies about you, about your sexual history, about this and that, horrible, horrible things, to make your entire family and your friends and everybody disown you. That's the kind of thing that has huge individual impact on that person, and I think we still need to focus on it.
Evelyn Douek:
You're absolutely right. All of the studies show that it's really, really hard to move the needle on something that people already have deeply held beliefs about, or where there's lots of other content that they're being swamped with, which brings us to the study that Sol talked about.
As you said, it's a bit of a Rorschach test. I saw this being discussed in different ways online, with half of the people saying, "You see? This shows that digital advertising has no effects," and the other half saying, "You see? This shows that digital advertising can have really meaningful effects." I saw you tweeted that this study really calls into question the assumptions people have had about the effectiveness of political ads since Obama's second campaign. Can you unpack what you meant by that?
Alex Stamos:
I think people forget that the first major political campaign in the US that really effectively used social media advertising was the reelection campaign for President Obama in 2012. At the time, this was not seen as a scandal. The people who did it went around and gave talks. In fact, I saw a really good talk by his team at an Amazon Web Services conference, because they were also very early users of AWS, and they talked a lot about the architecture and the data. They gathered up a lot of data, I think legally, and not in a completely sketchy way, and they were able to use it, and they believed that it was determinative. They didn't have a lot of good evidence for that; it was just one of the things that got attached to Obama's reelection.
I think since then people have just kind of assumed that online political advertising is incredibly impactful. Like Solomon said, they did show some effects, especially for early voters, where it might be effective, but the overall effects are not humongous. There's this ridiculous Netflix documentary about Cambridge Analytica and advertising and such, and we've had, over and over again, this media blitz on the idea that online ads can just reprogram people. That was clearly overstated, but this is a stronger result than I expected. It has actually made me rethink a little bit, because I have been pretty aggressive about talking about limits to political advertising online. And if it doesn't matter that much, then I think it's still reasonable to have limits, but it's not as big a deal versus some other [inaudible 00:18:51].
Evelyn Douek:
Great.
Alex Stamos:
I do also want to say, I think this paper is really bad overall for the big tech companies, because if you're reading it, it's hard not to ask: well then, do ads for ordinary commercial products work? They didn't specifically address that, but it does raise the question of whether this whole post-Cambridge Analytica bubble of believing that digital ads can just manipulate people was justified. If you're the chief marketing officer at Toyota, that belief means you put a lot of money into digital advertising versus TV. So that should be interesting. And it's happening at the same time that Apple, through their privacy moves, is making it harder to measure the outcome of digital ads. So this is kind of a one-two punch against Google and Facebook especially.
Evelyn Douek:
Although potentially people have less strongly held beliefs about which soap they should use as opposed to which candidate they're voting for. But who knows? People feel very strongly about soap sometimes. All right, so anything else on those studies before we move on?
Alex Stamos:
I'm really glad to see it. These are two fantastic researchers. I just want to compliment the entire teams here and congratulate them on getting into Nature. It's great to see the editors of a variety of Nature outlets be willing to publish this kind of research even when it questions some widely held assumptions.
Evelyn Douek:
And I mean, just to go back to the point that we opened with: policymaking and lawmaking should be empirically informed, and this is a time of immense activity in lawmaking and immense concern about these social media platforms and what's going on. So it'd be great if this kind of research filters up.
I think it also makes a very strong case for the transparency and data access laws that are floating around on the Hill, because these kinds of studies are extremely informative and extremely helpful in working out what we should do. That's the best place to start, so that we can make the rest of our policymaking informed.
Alex Stamos:
And to make a pivot, I think especially Tucker's work would actually probably be impossible under the API change that just happened. Is it time for us to pivot to that?
Evelyn Douek:
Fantastic segue. And with that... All right, that's actually the perfect sound for this segment, our weekly Twitter corner. As you just mentioned, Alex, stories have been coming out that Twitter is cutting off API access for third-party apps. You've said you think this is a pretty big deal. Why?
Alex Stamos:
Yeah, it's a big deal on a couple levels. Clearly, it is probably about money. Twitter has always had this love-hate relationship with third-party apps. They're one of the only major commercial platforms that have allowed third-party apps to exist. YouTube doesn't allow it. Facebook goes to incredible lengths to sue people. LinkedIn fought a Ninth Circuit case, that's how far they went to prevent any third-party access. So Twitter for quite a while allowed third-party apps, but it was always a challenge for them, because these apps don't allow them to show advertising, and sometimes don't send back the user analytics that are useful for figuring out what people are doing, what they're clicking on, all that kind of stuff. The data just flows off and nothing comes back; nobody clicks anything. So you end up providing a service completely and totally for free.
As long as third-party apps were a small percentage of users, it wasn't a huge deal. It's unclear to me what the percentage was; it couldn't have been that big, but Musk is clearly desperate for every source of revenue. The flip side is that this API, the streaming API for Twitter, is used by a lot of researchers to study what happens on the platform. I know of several university groups that have now gone completely dark. They have no idea what's going on. They have to rebuild their systems to use other APIs that still exist.
Just like you said, empirical research is incredibly important for policymakers, and Josh Tucker's research could only exist because in 2016 there were APIs he was able to pull to see what people were seeing on Twitter. We're not going to be able to get that five or six years from now if they continue to go dark. So I do think it is a great demonstration of why we need PATA and equivalent laws around the world.
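For context on what's going dark, here is a minimal sketch of the kind of streaming collection researchers have relied on, written against Tweepy's client for the v2 filtered stream. The bearer token and the query are placeholders, and whether this endpoint remains reachable for researchers under the new API regime is exactly the open question.

```python
import tweepy

class ResearchStream(tweepy.StreamingClient):
    """Collects tweets matching the registered rules in real time."""

    def on_tweet(self, tweet):
        # A real pipeline would write each tweet to durable storage.
        print(tweet.id, tweet.text[:80])

stream = ResearchStream("YOUR_BEARER_TOKEN")  # placeholder credential

# Register a hypothetical research query, then start streaming.
stream.add_rules(tweepy.StreamRule('("election" OR "vote") lang:en'))
stream.filter(tweet_fields=["author_id", "created_at"])
```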
Evelyn Douek:
This was one of the big concerns that I, and many of us, talked about when Musk took over. Twitter really led the field, both in providing transparency and data access for researchers and in providing transparency around the effects of their content moderation interventions, really important stuff that helped inform the field more generally, and the question was whether that would disappear under Musk. This is one example. Another example, and this is probably not mal-intent: if you go to their transparency report page at the moment for their rules enforcement, this is an industry-standard report, actually pretty inane and not super useful for people, but something that every major tech platform puts out about how much content they've taken down over the last six months, it says "Data not found" or some similar error message right now. It's just not a priority for Musk. So I think that is really sad to see as this goes on.
Alex Stamos:
And I don't think there's a nefarious conspiracy there. I think he's lost all of his good people. He has cut a huge number of folks. There are a bunch of functions at Twitter, like putting together that transparency data, where it's just nobody's job. There might not even be people left who know that it used to be somebody's job. You're at the point where you fired the people, and then you fired the people who knew that those people existed. It's become a kind of crazy disappearing act; you're Trotsky-ing entire functions out of Twitter, such that nobody left in the building even knows these are things that are generally done in the industry. For something like a transparency report, that's not great, but not terrible.
I'm really worried about infosec right now, because Twitter has a long history of difficult security challenges. They had a team that over the last year and a half or so was really busting their butts, and that team has completely evaporated; there's only a handful of people left. That's the kind of thing you don't see from the outside until there's a breach, or until there's a downstream effect of somebody going to jail because secret police were able to steal data from Twitter, that kind of stuff. I think all of these little things are indicators that things are really chaotic there right now.
Evelyn Douek:
It'll be interesting to see the downstream effects of this on research in the area more generally. Whenever you went to conferences in this space, social science conferences, the number of papers on Twitter outnumbered the number of papers on YouTube like 20 to 1, even though Twitter's user base is a fraction of the size, simply because Twitter provided more data. So it'll be interesting to see what those conferences look like now, whether there's a grand rebalancing or no one shows up. We'll see.
Alex Stamos:
No, and that's a good point, and that's a legitimate argument Twitter had: people spent way too much time on Twitter because the API was so good, right? We boxed ourselves in, but it was still a good thing to allow people to do that research. Yes, a lot of people who would just pivot to Twitter for everything are going to have to be more thoughtful about getting data elsewhere. But the flip side is this helps create the incentive structure for other companies to lock things down. This is something that both of us mentioned right when Musk started: Mark Zuckerberg has hated for years the ways that people can get data out of Facebook. If Musk is going to just completely shut people off, it creates an incentive for Zuckerberg to follow a little bit, because whatever he does will not be as bad as Twitter, and therefore he could probably resist some of the yelling and screaming that otherwise would've happened.
Evelyn Douek:
Yeah. Musk continues to be Zuckerberg's best friend at the moment. Although Musk is of course still providing meaningful transparency: the Twitter Files still come out. Honestly, is anyone reading these anymore? Oh, I'm sorry, this might be one of our last chances to do it. The Twitter Files.
Alex Stamos:
The Twitter Files.
Evelyn Douek:
Still come out. Matt Taibbi is still publishing them. Are you finding anything interesting in them these days, or has it petered out in terms of controversy?
Alex Stamos:
I continue to read them partially to see what kind of death threats I'll be getting that night if I get a mention in an email. It is still interesting to see the inner workings of these companies. To me, it's much less surprising because I've seen how these companies operate. But it is interesting to see this kind of transparency.
One of the funny things about the Twitter Files is how Yoel Roth, who I think is a great guy and has worked really hard to try to make Twitter safer while also trying to respect people's free expression rights, goes from hero to villain, hero to villain, sometimes in different tweets of the same thread. If you take a step back, that demonstrates that it's legitimately hard when you're dealing with the FBI, when you're dealing with Congress, when you're dealing with Brazil and Germany and all of these countries, even just the democracies. It is legitimately hard to figure out whether requests from these governments are good or bad.
And there's a bunch of stuff in there, they now have internal emails, where Yoel is like, "This is ridiculous." So all of a sudden he's a hero who is standing up. And then Elon is driving people at Yoel by retweeting really horrible QAnon rumors about him, right? It just demonstrates a complexity of dealing with these things that you and I clearly understand, but through the lens of, say, Amy Klobuchar on the left, or Glenn Greenwald on, whatever he is, the right-ish, you can take these things and completely interpret them one way or another. Whereas the reality is really fuzzy: in some cases, the FBI is giving Twitter accurate information that these are foreign influence operations; in some cases, it looks like they overstated it. Certainly Congress has overstated it. And I think it is good for people to see that if you work at these platforms, every member of Congress is continuously bugging your government affairs people about content they don't like on social media. Everybody, every single one of them, Republican or Democrat.
To go back to some of the transparency ideas that we've mentioned on this show: if Musk wants to release all the interactions with everybody with a house.gov or a US Senate email address, that would be totally fine, because then we could see all of the interactions and all of the requests. This selective leaking, while interesting, does not allow you to draw a good conclusion about the behavior of different political actors.
Evelyn Douek:
Moving through some of the other stories we have this week: the New York Times had a nice story over the weekend about public universities banning TikTok on their Wi-Fi networks. The students are somewhat sad, but also laughing, because they just turn on their data plans and access TikTok that way. What's going on here? Does this have any meaningful impact, or is it just posturing when these universities ban TikTok from their networks?
Alex Stamos:
I mean, it's certainly posturing, but I think it's fascinating for us to discuss. It raises a really interesting question that free countries are going to face, which is: how do democracies with laws like the First Amendment fight against products that people want to use but that come from authoritarian states and might be an overall geopolitical risk? We don't have a great firewall of the United States, and I don't think we should.
Preventing American citizens from accessing a product that comes from an overseas company feels very, very, very sketchy to me. You could speak to whether it's actually a First Amendment violation, but to me, free expression is not just about what you can say, it's about what you can access, and copying the Chinese a little bit by dropping IP blocks on things we don't like from other countries makes me very nervous. Like you said, it's silly in this case. It is totally legitimate if the federal government says, "This is a federal phone that you're using for federal business; we don't want TikTok on it." Totally smart move, totally reasonable. But to block IP addresses on what is effectively an ISP for teenagers is a step too far, and it's just not compatible with how we should fight authoritarian governments. We should not fight fire with fire.
Evelyn Douek:
I mean, it just shows how a lot of the legislative responses in this area are not particularly well thought out or carefully formulated. The reason these bans are taking place is that all of these states, I think it's around 23 now, have passed some version of banning TikTok on government devices, government public networks, something along those lines. And as you say, there may be a really good case for saying let's ban it on government phones. But often these laws are not drafted precisely, and public universities don't really know the extent of their liability, so they're being prophylactic and erring on the side of caution. And that is when you really do open yourself up to First Amendment challenges.
Yes, it is the teenagers who can't get their dances, and that's totally protected speech. It's also the academic researchers; you and I probably both know plenty of academic researchers who are saying, "This has really impacted my ability to do my research," because they are researching TikTok, in part to work out what the problems with TikTok are. That, again, is absolutely protected speech activity. So I think the fact that these laws aren't carefully drafted and are extremely overbroad definitely makes them vulnerable to First Amendment challenges. No doubt we will see those coming soon.
Alex Stamos:
It does seem consistent with all these state legislatures censoring what people say in public universities and censoring what people say in public schools, right? It's just part of the same censorial impulse that we've seen from, honestly, local conservatives, who talk about free speech in lots of ways and then spend all their time in the legislature restricting the speech of people.
Evelyn Douek:
I did not mean to imply that I am shocked, shocked that these states have not been adequately conscious of the speech effects of their laws. It's hardly an exception.
So, we spoke about the Stanford Internet Observatory. There was a big paper out this week from Renée DiResta, our colleague, and OpenAI about the potential uses of ChatGPT for influence operations. Can you talk a little bit about what it said?
Alex Stamos:
Yeah. Our colleague Renée has spent a bunch of time with OpenAI thinking through how really good large language models like ChatGPT would affect influence operations. It is a very long, very complex paper; I recommend you read it. But as you'd imagine, the answer is it doesn't make things better for us, it makes things easier for them. And I think this has always been the concern of a lot of folks: it's less about straight-up bot activity, where you've got completely autonomous agents. Let's go back to Saudi Arabia. Instead of having to fill a building full of people who speak perfect English, who have an understanding of American politics, who have the ability to write editorials that look like they're really written by Americans, you can just be one guy who has that skill, and ChatGPT will do all the work for you.
I think it's wonderful that OpenAI does this research openly and proactively. It's really cool that, as they invent these absolutely incredible AI products, they're also thinking about the downsides. The flip side is that OpenAI trying to put protections in place here is not going to matter for long, because something like ChatGPT is effectively going to be running on people's gaming graphics cards in a year. We're in a weird world where you have a handful of thoughtful companies thinking through the impacts while the work they do trickles down into on-device systems really quickly. We saw exactly this with all the image stuff, where DALL-E and some other systems made it so that you couldn't generate porn or artificial CSAM and such, and then those techniques were turned into models you can download on the dark web, along with really detailed instructions on how to use them.
I think we'll continue to see that trickle-down effect. It's great to do this research, but in the long run, you can't just ask OpenAI to put the protections in place. You have to get ready for a world in which this amount of content can be created pretty cheaply.
Evelyn Douek:
Following up on a story that we've talked about before: we've complained about Apple's lack of transparency around the App Store. We saw some semi-exciting headlines this week about how they're promising more transparency in response to activist investors who have been upset with bans of certain apps from the App Store, including, for example, Bible and Quran study tools, and concerns about them being taken down as a result of government pressure. So there was some reporting that Apple has agreed to publish more transparency data.
But it comes back to the fact that with transparency, the devil is in the details. It turns out that Apple will be publishing the legal basis for removal requests by each government, but it will only be broken down by country and app category, and we won't have any information about individual apps. Maybe it's one step forward, half a step back; I don't really know what to make of it. I guess it's good that there is some response to the pressure for transparency, but I'm not sure how meaningful this will be.
Alex Stamos:
I don't think it's going to be meaningful at all, because they're not going to list the actual decisions that are made. Right now, there are nonprofits from which you can download the list of apps that are banned in certain countries, especially China. The way they do that is they fetch the XML metadata for all the apps in the App Store from a bunch of different phones configured in different ways, and then they compare them all to each other. In some cases, that metadata is missing, and that could be an intentional decision by an app developer to just not be in a certain country, where they didn't internationalize the app, or they unchecked the box so their app's not available in Morocco because they don't feel comfortable there or whatever. But in other cases, it demonstrates really clear censorship.
I have said this before: Apple has done more for the Chinese Communist Party than any other tech company has done for any other authoritarian state or party. One of the things they do is make stuff disappear from the App Store so that Chinese people don't have to be bugged by things like the New York Times or VPNs or access to uncensored data. I think that is a completely unethical thing for them to do, and this tiny little bandaid does not make it better.
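For listeners curious how that outside measurement works, here is a rough sketch of the storefront-comparison technique, using Apple's public iTunes lookup endpoint (which returns JSON; the nonprofits Alex mentions pull richer metadata from devices configured for different regions). The bundle ID below is just an illustrative example, and, as he notes, absence in one storefront can reflect a developer's own choice rather than censorship.

```python
import requests

def available_in(bundle_id: str, country: str) -> bool:
    """Check whether an app's bundle ID resolves in a country's storefront."""
    r = requests.get(
        "https://itunes.apple.com/lookup",
        params={"bundleId": bundle_id, "country": country},
        timeout=10,
    )
    return r.json().get("resultCount", 0) > 0

bundle = "com.nytimes.NYTimes"  # illustrative example app
for cc in ("us", "cn", "ma"):
    # An app present in "us" but missing in "cn" is a censorship signal,
    # though it requires follow-up to rule out a developer opt-out.
    print(cc, available_in(bundle, cc))
```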
Evelyn Douek:
And a quick update from the legal corner. I think we're going to need a sound effect to introduce this from now on, maybe a gavel hitting the table.
Alex Stamos:
We'll work on it.
Evelyn Douek:
Hear ye, hear ye. In court action, it's impossible to keep up with how much is happening in the courts around this area. Getting in on the action, the public school district in Seattle filed a novel lawsuit against the tech giants, basically every platform you can think of, saying that they have caused a mental health crisis among youth and seeking damages for all of the extra time that teachers and schools have to put into dealing with the fallout. When you see every headline calling it a novel lawsuit, that should give you some indication of its chances of success. I think it's just indicative of all the ways the law is being used and the posturing happening around these issues at the moment. No doubt platforms do need to be conscious of the mental health effects they have on youth, but drawing a straight line between those effects and a teacher putting in extra counseling time is a little bit of a stretch.
Relevant to that are the lawsuits trying to get around Section 230 by arguing that platforms are accountable for the way they amplify content. That question is at the Supreme Court right now; it's being argued next month, and this week Google filed its brief in Gonzalez, arguing that if the court were to find against Google, and find that Section 230 didn't protect platforms for content they amplify, the internet as we know it would break and the sky would fall. It is both not a surprise that that's what they're arguing and also not too much of an exaggeration, depending on how the court rules in this one.
Speaking of the Supreme Court, its docket on free speech issues this term is nuts; my seminar is writing itself. It took another case this week, Counterman v. Colorado, and the synopsis doesn't sound that exciting: it's about what mens rea you need to prove a true threat that is unprotected by the First Amendment. But if you dig in, this is actually a case about a prosecution for stalking on Facebook, through Facebook messages. The defendant stalked a local musician, sending her direct messages over the course of a number of years, and it's being argued that these don't amount to true threats, and are therefore protected by the First Amendment, because he wasn't subjectively intending to threaten actual violence.
This case is scary to me because it's being argued as a true threats case, but the statute underlying it is actually a stalking statute. If the court decides that stalking statutes need to be significantly narrowed and that the mens rea requirement needs to be much higher in this area, I think that would be a step back for stalking prosecutions online, which is a massive issue. So I'm worried about that one.
Alex Stamos:
This is totally the wrong direction. We do not have enough laws that take into account the impact of being stalked, even by just one person. I have dealt with a number of situations, both at Facebook and since then in private consulting, where one determined person who understands how the law works and how the internet works can make an individual's life total hell. In the case of children and teenagers especially, this has led to lots of horrible outcomes, including suicide. And that is just one person. If you're in a situation where you are Dr. Fauci, or anybody mentioned in the Twitter Files, or somebody QAnon is holding up as a pedophile or groomer, then your life can be complete and total hell. We need more laws that take into account the real-world impact of concerted attacks against individuals, and I hate the idea that the Supreme Court might wipe that out in a very undemocratic way and not allow states to come up with laws to actually protect people.
The laws that we have right now are not working; the enforcement we have right now is not working. Raising the bar even further, I think, could have really harmful long-term effects in a way that is totally apolitical. This affects, I hate to say both sides, and it's not totally symmetrical, but it is true that it affects people from a wide variety of perspectives. And at the same time, the Supreme Court is asking for special protections for itself. They want it to be illegal to post their home addresses. They want special privacy rights that aren't given to any other person in the United States, only to federal judges. I think they have reasonable points there; honestly, if I were a Supreme Court Justice and had kids, I would be really worried about them and the safety of my family. But that should give them more empathy for what everybody else faces, not less.
Evelyn Douek:
Knowing how the legal sausage gets made, the only saving grace may be that a number of Supreme Court Justices have recently been stalked, including online, and so that may inform their analysis of this case. I may be totally wrong; this may be an excellent pronouncement in a relatively unclear area of First Amendment law that protects stalking statutes around the country. It is truly amazing how much of the First Amendment and platform regulation is up for grabs on the court docket this term. So we'll see how much wreckage it can do.
In the meantime, if you were wondering whether tech regulation was going to continue to be a political hot-button issue: a number of bills have been introduced already. The Republicans in the House have introduced an anti-jawboning bill, presumably in response to the Twitter Files reporting about government pressure on platforms to take certain things down. And then Biden joined the fray with an asinine Wall Street Journal op-ed about tech regulation this week, with the wonderful phrase, "We must hold social media companies accountable for the experiment they're running on our children for profit." We started the episode with some optimism about empirically informed policymaking; maybe we end on a note of pessimism.
Alex Stamos:
But without good empirical evidence, what chance do they have? I mean, they're clearly trying to run the tobacco playbook. Everybody wants to be the tobacco class-action lawyers. When you graduate from law school, I guess you don't decide you want to defend big corporations. You want to be a prosecutor and put criminals away, or you want to be a public defender, or you want to be a professor and teach the next generation, or you want to be a class-action lawyer. And if you're going to be a class-action lawyer, you don't want to do slip-and-falls; you want to do the tobacco litigation, something big and world-changing.
But there were, what, 50, 60, 70 years of very hard empirical evidence on the impact of tobacco, whereas all of these studies about the impact of social media are incredibly mixed. It does seem to me a little bit silly to try to front-run this when you don't have all that evidence to lean on. I mean, will it even get to a jury? Will a judge allow this to move forward unless they have lots and lots of real evidence to back up the creation of this class and everything?
Evelyn Douek:
I wouldn't bet on it.
Alex Stamos:
There are legitimate concerns here, right? I am very careful about my kids' social media usage. This is a common thing the media loves to write: "Tech executives don't let their kids use their iPads for 14 hours a day." Yeah, I know. No shit. You have to be really careful about children using the internet. But our responses should be targeted and thoughtful and based upon evidence, not just these big press-grabbing class-action lawsuits, which don't seem to actually fix anything, right?
Evelyn Douek:
We will have to do an Alex how-to episode on how to effectively make sure that your teens' and children's social media usage is safe. I'm sure that many people would be interested.
Anything else before we wrap?
Alex Stamos:
No. It's been a busy couple of weeks. I don't think 2023 is going to be super chill. Now that the World Cup's over, we should probably turn this into a weathercast: it has been raining continuously here in California-
Evelyn Douek:
Exactly. And you're dealing with the lake this morning.
Alex Stamos:
I was dealing with the lake in the front yard, my ark is only half constructed. We'll see if we get there.
Evelyn Douek:
I hope I can finagle myself a ticket to the ark when the flood comes.
Alex Stamos:
Well, I'm going to have two of every kind of tech policy commenter, right? So I need two First Amendment lawyers. I think you can help me with that.
Evelyn Douek:
Fantastic. I knew it. I knew that being a First Amendment lawyer would save my life one day.
And with that, this has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent.
This episode wouldn't have been possible without the research and editorial assistance of John Perrino, policy analyst extraordinaire at the Stanford Internet Observatory. It is produced by the wonderful Brian Pelletier. Special thanks also to Alyssa Ashdown, Justin Fu and Rob Huffman. Talk to you next week.