Moderated Content

MC Weekly Update 7/31: It's Complicated

Episode Summary

Alex and Evelyn discuss some of the week's headlines, including Ex-Twitter's continued trust and safety, er, best practices and a baseless threat to try to shut down research it doesn't like, before being joined by Josh Tucker and Jen Pan, two academics who are part of a research partnership with Meta to examine the impact of Facebook and Instagram on key political attitudes and behaviors during the US 2020 election. The group released the first four papers this week, and Josh and Jen discuss their findings and what they mean for platform design.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments. They’re joined this week by NYU’s Joshua Tucker and Stanford’s Jennifer Pan to discuss new studies released from an academic research partnership with Meta on the 2020 U.S. election.

The X Files

No Labels

Shutting This Down

Getting Meta on Meta

(Evelyn’s) Sports Corner

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Evelyn Douek:

Alex, the quarter is approaching. Are you getting ready for school this year?

Alex Stamos:

I am. I'm going to be teaching the Hack Lab again this fall with Riana Pfefferkorn.

Evelyn Douek:

Frequent correspondent. Yeah.

Alex Stamos:

Frequent correspondent, yes. We teach this intro to cybersecurity class, but I'm stuck because we teach these students how to hack. They're non-CS students. We have to have virtual machines for them to do it, and it cost $25,000 last year to teach it on Google Cloud. And I could spend that money without any oversight from Stanford. But to buy $30,000 of hardware and therefore not have to pay any more cloud costs, I apparently need approval from the president of Stanford and-

Evelyn Douek:

Slight hiccup there.

Alex Stamos:

... that position is open unfortunately.

Evelyn Douek:

This from the university that I just discovered has a meditation center on campus, which I didn't know about, but I'm sure it cost substantially more than $30,000.

Alex Stamos:

Probably, yes. But good thing we have the meditation center and no online cyber range. So that's the current state. If you're looking at the US News rankings, maybe there is a number in the US News algorithm for meditation centers. Meditation centers: one. Cyber ranges: zero.

Evelyn Douek:

What do you do if you get hacked, then?

Alex Stamos:

Go find yourself.

Evelyn Douek:

Exactly. And with that, welcome to Moderated Content's weekly, slightly random, and not at all comprehensive news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos. Let's head straight to the X-Twitter corner this morning. So just an all-round great week in all-time excellent trust and safety content moderation decisions here from-

Alex Stamos:

An amazing week. It's just every week gets more and more exciting.

Evelyn Douek:

Exactly. Just if you thought that there was a third rail that no one would touch, it's child sexual abuse material. It's the example that everyone gives as the content that everyone agrees needs to be taken down and is a bridge too far, even for the free speech absolutists. Well, this week, Musk reinstated an account that posted CSAM after the post in question had drawn more than 3 million views and 8,000 retweets, which is hard to square with his professed zero-tolerance child exploitation policy.

Alex Stamos:

The man who keeps on talking about how he has zero tolerance, how he got a bunch of people who really don't know what they're talking about to basically say that Twitter had all of a sudden solved these problems very early in his time. We've been trending in this direction. We did our research that we released in May that demonstrated a lot of problems with Instagram, but one of the things that we had found in doing that work was not only are people selling child sexual abuse material on Twitter, but that Twitter's basic scanning system had broken, and Twitter then cut off our API access. So we can't actually verify what's going on. But in this case, this is not some accidental thing. This is not something breaking. A conservative conspiratorial influencer posted an image that was taken from an infamous video of a very small child being sexually abused, and it was up for days.

It had millions of views. It was finally taken down and the account suspended per Twitter's policy, and then Musk intentionally made the decision to bring him back. So there's a big exception now in Twitter's policies on child sexual abuse material: if Musk likes your conspiracies, then you get another chance. It's more of a three-strikes thing if you happen to have the right political alignment for Musk. Shocking. And, I should say, I did not see the image; Musk said only their CSE team has seen the image, when apparently 3 million people saw it. As we've talked about here, there are a couple of different definitions of what is illegal in the United States, on things like actual sexually explicit content, including graphic sexual congress with a child. It sounded like it might've had some black boxes on it or something like that. That is not good enough.

If you look at 18 USC 2256, which has the definitions, that would still be enough for lascivious exhibition. So if you just have a naked picture of a teenager and there are black boxes, that can still go into that test. And that actually turned out to be an interesting problem in our work on Instagram: they were having effectively what you might see as a bikini shot of somebody under 18, but they're actually nude, and they're doing it with black boxes. That is not something that gets you out of the law. So Twitter, I hope, reported this to NCMEC, because they are required to under 2258 in that same area of the law; they are absolutely required to report that guy to NCMEC. So we'll see what happens, if there's any legal response here, but it's just, I mean, a completely shocking decision by him to basically say people of a certain political bent get multiple chances to post CSE.

Evelyn Douek:

And just to be really clear, intent doesn't matter. The ostensible intent of posting this image was to raise awareness about child sexual abuse, and I don't even know how that pans out to be helpful. But the re-victimization occurs just by the fact that the image is reshared, and it does harm to the victim, and it's just not at all necessary for the image to go up. So that ostensible intent doesn't matter. It's worth noting, though, that the Washington Post said that the same photo was posted on Instagram, and while the photo was taken down much more quickly, it had already gotten 600 likes before being deleted. The same person, the same user, does still have his account on Instagram too. So I guess there is just a difference in policy in how to handle it, but it's just hard to square this decision and the personal intervention by Elon Musk with their stated policies.

Alex Stamos:

Yeah, absolutely. You just can't say that you're totally against CSAM online and then reinstate an account. I don't know. I don't even know how we're supposed to do this podcast, because it's just us reading the headline. I mean, what analysis do you need here? This man broke the law. Pretty much every platform has a very hard, one-strike policy for this content. It's not a good idea. It's not a good idea.

Evelyn Douek:

In another "Alex, is this good trust and safety behavior?", just to make doubly sure, Twitter or X also reinstated Kanye West's account this week because, and I quote, "They'd received reassurance that he wouldn't use the platform to share antisemitic or otherwise harmful language." Now, just as a recap, West was banned from the platform last year for posting a swastika merged with the Star of David, and that was the final straw following a whole bunch of bigoted tweets, including, for example, suggesting going "death con 3" on, all caps, "JEWISH PEOPLE." So really not ambiguous stuff here. This is pretty clear. So just checking, Alex, trust and safety best practice to reinstate accounts that breach the rules multiple times in pretty blatant ways if they reassure you that no, they're going to be better this time?

Alex Stamos:

Especially people who clearly are in a mental crisis, who have demonstrated that crisis by horrible antisemitic attacks. No, I would say that that would not make your recidivism problem better, if you reinstate somebody who has broken the rules over and over and over again. It just shows the desperation, I think, of X and Twitter, that you have to have these people on there because they do drive eyeballs. The amount of engagement around a crazy conspiracy theorist or Kanye West, I mean, people are going to tune in to see how is Kanye going to set himself on fire this time? That doesn't mean they should be carrying the video of him destroying his career and giving him a platform to make it easier for him to hurt himself, but at least people will watch. And if you're as desperate as they are for any engagement metric, then you do it.

Although it's interesting here that they are not going to run ads next to his account, and I don't know what that means. Does that mean directly against, or on the same page? I mean, that starts to get actually quite technically complicated. If you're going to say there's X space above and below this specific content from this guy that can't run an ad, you start to have to really muck around in the bowels of the software. So we'll see whether that actually gets built right. We will see if that actually gets enforced in full.

Evelyn Douek:

Yeah, no, I'm sure it's really well thought out and lots of forethought has gone into how that statement's going to be enacted. It's okay, though, Twitter's on it. The "CEO," Linda Yaccarino, this morning sent an email to employees announcing a reorganization of the trust and safety group into three verticals: content operations and enforcement; legal demands and law enforcement; and threat disruption. And she's also looking to hire a head of brand safety, which sounds like a thing that they need. So this will do it. This will fix everything.

Alex Stamos:

And the most fun job at X after "CEO," yes, is head of brand safety. Good luck with that. So yeah, it's going to be fascinating to see. We're going to find that somebody whose personal ambition massively outstrips their good judgment is the only person with any qualifications who would take that job.

Evelyn Douek:

Exactly. Okay. And talking of brand safety, can I get a legal news sound effect? Thank you. Although I don't know if we actually should use the legal corner sound effect for this, because I think it's a mistake to interpret this as actual legal action rather than something more akin to performance art. Because this is the breaking news this morning: the New York Times had a story about a letter that Alex Spiro, on behalf of Twitter or X Corp, sent to the Center for Countering Digital Hate, which is a nonprofit that does research on social media and has written a whole bunch of studies over the years about hate speech, et cetera, on social media platforms, including Twitter. And the letter cites a bunch of papers, including one that, for example, found that Twitter had taken no action against 99% of 100 Twitter Blue accounts that the center had reported for tweeting hate.

And when I first saw this headline, I thought that the action they were threatening would be something like defamation, which would've been equally ridiculous, but at least logically would make sense. But instead, Alex Spiro is suggesting that the Center for Countering Digital Hate violated Section 43(a) of the Lanham Act, which prohibits misleading advertising. Yes, that's right: misleading advertising. So just galaxy-brain stuff here. It's fantastic, though, because the center got Roberta Kaplan to write a letter for them in response. She's perhaps most well known now for representing E. Jean Carroll in her lawsuits against Trump, but she was a formidable and well-known lawyer well before that. And in her letter, she's just clearly having fun. She says, "We write in response to the ridiculous letter you sent our clients; the threat that you make is bogus and you know it. None of the examples cited in your letter constitutes the kinds of advertisement that would trigger the act, and this is a transparent attempt to silence honest criticism."

And then my favorite bit is she says, "If your clients do file suit, please be advised that the Center for Countering Digital Hate intends to seek immediate discovery regarding hate speech and misinformation on the Twitter platform, Twitter's policies and practices relating to these issues, and Twitter's advertising revenue," which is basically just saying, let's go. You want to litigate this and litigate the truthfulness of our claims? Let's take it to court. So this is another fantastic move by the supposed free speech absolutist, Elon Musk. We have talked a bit about these kinds of studies before on the podcast, and about quibbles or disagreements with some of the methodologies that some of these different groups use, but there's certainly no way that this constitutes a violation of the Lanham Act, or that trying to shut down this research in any way fosters free speech.

Alex Stamos:

Yeah. So I can't speak to it other than it just sounds like a laughable letter. The New Yorker did a whole profile of Alex Spiro, and the whole thing made me wonder how long Quinn Emanuel is going to let itself be embarrassed by this stuff, that effectively their logo will get stamped on whatever letter their celebrity clients want written. I guess the billings must be spectacular. It's probably the only bill that Musk pays on time, Alex Spiro's bill. So I mean, one, there's just a question for Quinn Emanuel: when do 3Ls start to see it as embarrassing to have a summer associateship or a job there stamped on their resume?

But beyond that, this group, this is one of the groups I was thinking of when we talked previously about effectively activist groups that dress up their reports as research. They point out things that actually happen. I have never seen any evidence of them inventing anything. They do make, I think, from my perspective, statements that are not backed by the evidence, and certainly are not peer reviewed or using any criteria or standards or methodology that can be observed and proven by others. And in complete and total contrast, that group is the complete opposite of the people we're going to talk to later today on this podcast.

So as you're listening to this podcast and you listen to some real social scientists join us, that is the level of rigor you really need to make the kinds of statements that CCDH makes. That being said, they should not be sued for that. They have the absolute right to criticize the things they do see on Twitter. And again, they make a bunch of qualitative statements that are not quantitative, that are accurate: we saw this hate speech, we saw this activity. Certainly lots of people feel that things have gotten worse on Twitter.

I certainly feel that way. I try to be very careful not to make big, sweeping quantitative statements, but whether you make a statement that's backed by hard evidence or not, you should be allowed to criticize Twitter. And the way Twitter can fight back against this, if Twitter wants to have good research here, there is a solution that other platforms have figured out, which is: you open up your APIs and you open up your data for good quantitative research by people for whom the quality of their research and the ability to get peer review are actually part of their professional motivations, not being an activist.

So this is an exact demonstration of why Facebook went through all of these cycles over and over again and has ended up in a place where they share lots of data with researchers, either via API or via some other means. Again, we'll talk to our friends today about the research they did and how Meta opened up the data vaults to allow that to happen. But that is how you get there: if you don't like this criticism and you want to have much more reasonable takes on what is going on, then you have to provide good data to people who actually care. So Twitter has created this problem for itself, because the activist groups are not going to care about your API restrictions. They're not going to care about the fact that you cut them off. They'll just go and either write scripts to scrape your site or just have people go and look and do descriptive criticisms, and it's not going to stop.

So anyway, I know Linda's a listener every week, but Linda, this is exactly why other companies have gone through this and created data sources for academics. And since X keeps on speedrunning every mistake the other social media sites made in the 2010s, this is one that you can try to fix pretty quickly by allowing other people to look in and then put out their own paper saying maybe hate speech isn't that much worse, or maybe these numbers aren't bad. I think probably the numbers actually are pretty bad, but if you really do believe that they're not, then have other people do work that is peer reviewed that says so.

Evelyn Douek:

Rather than just writing tweet threads saying, "No, no, no, we are really, really great. Just trust us," and bogus threatening letters to the activist groups.

Alex Stamos:

Well, and this is absolutely going to make this research much more famous. By writing a letter and threatening them, way more people are going to read these reports and see these claims. So it's just tactically a stupid idea as well.

Evelyn Douek:

A few more quick hits before we get into the interview that was just teased. Very quickly on Threads this week. Excellent, I love that sound of boxes being ripped. So just a story from the Wall Street Journal this week pointing out that Meta is not labeling state media on Threads in the same way it does on Facebook and Instagram. So, for example, if you go and check out RT's account on Threads, it's got a verified badge and it's not labeled at all. Now, it also has been dormant for three weeks, so I don't think this is exactly a massive propaganda threat that's going to bring down American democracy. But it is, I think, just a small story that shows that this platform was spun up very, very quickly without all of the same trust and safety infrastructure that the other platforms have. And even though the Instagram community guidelines technically apply to Threads, there's a lag in the ramp-up on enforcement, and hopefully that's being prioritized internally rather than this just being another example of moving fast and breaking things.

Alex Stamos:

So Threads is still missing a lot of stuff. I mean, they're moving pretty quickly considering that they're now building on a base of like 100 million users, although the number of active users has apparently dropped, which is not shocking. It would be interesting to see a graph of daily actives in the US of Threads versus Twitter. I'd love to see if you could get any reasonable data out of Twitter, which you can't at this point. But they still are missing some big things. They've shipped things like chronological timelines and such, but there are these basic trust and safety features and affordances that were built over years and years on Facebook and Instagram that have not been ported over yet. And I do hope that they get to those expeditiously. Things like this labeling, I think, are a smart part of the community that they say they want to build. They want to build a trustworthy community where there's less disinformation, where people are nicer to each other.

And I think there are basic features that already exist in Instagram and Facebook that have been proven to be pretty effective there. And with this one on labeling state media, folks should understand, if they're talking to somebody, is that person paid for? Yeah. Just a bit of the history here: one of the reasons these state media labels became very important on these platforms is that in reaction to the research about what happened in the 2016 election, so around the 2017, 2018 timeframe, you had the big companies, including at the time Facebook, shutting down the accounts of troll farms. And then what would happen is a bunch of that activity by authoritarian states, most particularly Russia and China, moved to the official state media and then moved to people who were hiding their affiliations with state media. So in the 2018 to 2020 timeframe, you saw an explosion of accounts where Russia Today or CGTN would create a subsidiary, which would then pay an influencer.

So nobody would know that this person who is traveling through China and just talking about how they don't see any camps when they travel through western China, everything seems fine, and people are super happy to be under the thumb of Beijing, that those people are really being paid by the Chinese government. So these state media labels are not just about Russia Today and CGTN and the Global Times and the obvious state media (although, apparently, some congressmen don't know that the Global Times is Chinese state media). It is really about the individual influencers and the sub-brands that try to obfuscate their relationships, which is always a hard thing. But Facebook has an entire team whose job it is to research the state media stuff, and they've done a pretty good job of finding those influencers and finding those economic ties.

Evelyn Douek:

Great. Finally, I just want to give a quick shout-out to a letter from Access Now and 65 other civil society organizations pushing back on the Thierry Breton comments we spoke about a few weeks ago, where he suggested that the Digital Services Act, which the EU is ramping up to enforce, could be used to shut down platforms in times of civil unrest, in response to the protests happening in France at the time. The letter notes how these organizations have been pushing back for years against exactly this rhetoric and against internet shutdowns by authoritarians around the world, that shutdowns are a favorite tool of authoritarians, and that statements like this from the EU and ostensible democracies don't help their cause. So far I have not seen any response from Thierry Breton. It's just a very disappointing state of affairs that these kinds of comments are coming out of Europe at the moment, and I just want to give kudos to these civil society organizations for calling it out.

Okay. And now to the meat of the episode. So the big news this week was the release of the first four papers coming out of a huge partnership, the US 2020 Facebook and Instagram Election Study, which was a collaborative effort between Meta and a team of external researchers that started in early 2020. This week, after years of work, the first four papers coming out of that partnership were published in Nature and Science. And we're really lucky today to be joined by two of the researchers working on it. So we're joined by Josh Tucker, a professor of politics at New York University and co-lead on the partnership with Meta, and Jen Pan, a professor in the Department of Communication at Stanford University and a co-lead author on two of the studies. Josh, let's go to you first: can you give us a high-level overview of the findings from the studies and the papers that were published this week?

Josh Tucker:

Yeah, thanks Evelyn. Thanks Alex. Thanks for having us here today. It's a pleasure to be talking to you guys. So the first four papers from the project were published in Nature and Science this past week. There's a ton of findings to dig into in those papers, but I want to highlight three big-picture findings that have come out so far across the four papers. The first is that algorithms are clearly extremely influential in terms of what people see on the platform and in shaping their on-platform experiences. The second is that we did find significant ideological segregation in terms of exposure to political news on the platform. So there's a lot of news that's seen primarily by liberals, a lot of news that's seen primarily by conservatives. And then the third big finding is that despite the first two things, that you can change aspects of platform affordances, that you can change aspects of the algorithm and it has a big impact on people's on-platform experience.

And despite the fact that there's this ideological segregation in exposure to news on the platform, we did not find much of an impact of changing a number of different platform affordances, which we'll talk about going forward. This included reverting people from the normal algorithmic feed they see on the platform back to a chronological feed, which is how what shows up in their feed used to be determined. It included a study where people did not have access to reshared content, to try to get at this question of virality. And it also included demoting content that came from politically like-minded sources, reducing the prevalence of that in your feed. And across all three of these experiments, we didn't see much of an impact on people's attitudes off the platform. And that included things like polarization on issues and what we call affective polarization. So those are the three big takeaways.

Alex Stamos:

So Jen, do you mind diving into the details on the two papers that you're the co-lead author on? But first I want to say congratulations on these little tiny Nature and Science, these crap little journals that nobody reads. These are also the most impressive bylines I've ever seen on social science research papers. It is like the entire who's who of people I respect and whose feeds I like. So I strongly recommend, to everybody who listens, anybody who listens to this podcast is qualified to read these papers, and it is a great way to find the people that you should be following and whose other papers you should be reading. But Jen, do you mind diving into the two where you were one of the lead authors?

Jen Pan:

Yeah, definitely. And thanks, Alex and Evelyn, for having me. So the two papers that I was lead author on are the reverse chronological feed paper and the holding out reshared content paper. So reverse chronological feed: this was conducted with over 23,000 consenting participants on Facebook and over 21,000 consenting participants on Instagram. And we randomly assigned a subset of those users to get their feed in reverse chronological order over a three-month period, from September to December of 2020. So what we find is, as Josh mentioned, being in reverse chronological feed substantially decreased the time that people spent on Facebook and Instagram, decreasing it by about 20% on Facebook and 11% on Instagram. And in cases we see-

Alex Stamos:

Did Meta send you a bill for the 20% less time for these 20,000 people? Because that is a measurable amount of money for that company.

Jen Pan:

Thankfully not. Especially because this did lead some of those participants to use other platforms. So on mobile, we noticed that IG users who got chronological feeds spent more time on TikTok and YouTube mobile. On chrono feed for Facebook, we noticed people went to Reddit and YouTube; this is on browser. Engagement on Facebook and Instagram also decreased, and it did change what people saw. So one really important thing is that being in chronological feed decreased exposure to content from like-minded sources on Facebook, and it also increased exposure to political content and political news.

Another aspect of being in a chronological feed, which is potentially unexpected, is that it doubled the amount of content from untrustworthy sources on Facebook and increased this content by more than 20% on Instagram. But on the other hand, in chronological feed you see a decrease in exposure to uncivil content. So I think one takeaway for platform design is that there are always these trade-offs. For chronological feed, you are reducing uncivil content, but it's going to increase content from untrustworthy sources.

Alex Stamos:

Do you have a theory on the mechanism there, of why in chronological feed you see more untrustworthy but also apparently more civil content?

Jen Pan:

The thing that we came to understand as we were doing this work is that so many different aspects of the experience are changing because of this change in the feed algorithm. You're changing how much time people are spending on the platform; you're changing who they see content from. So in reverse chronological feed, you're seeing more content from groups and pages relative to friends. On Instagram, you're seeing less content from mutual follows, so less content from people you follow who also follow you. So there's a change in the network structure, and you actually see content from a smaller proportion of your network in chronological feed than you do in the regular default algorithmic feed ranking.

Alex Stamos:

And that effect was stronger on Facebook than Instagram?

Jen Pan:

Which finding?

Alex Stamos:

On both the civility and the amount of untrustworthy data?

Jen Pan:

Correct.

Alex Stamos:

Yeah. So I mean, I think that's fascinating. I think it also matches a gut feel here, which is that one of the interesting things about Facebook is that the newsfeed you see is actually fed from products that have very different affordances, like the groups that you're part of, public groups, private groups, and individual posting. Whereas Instagram, more like X or now Threads or some other platforms, is more flat: you're not part of private groups, for example, on Instagram.

Jen Pan:

That's right.

Alex Stamos:

Yeah, so it's just interesting from that angle. So Josh, I'd love to talk a little bit about the mechanics here. Doing this work required both access to data as well as the ability to actually do experiments. I think that's one thing that was not really well understood, but this is the first time in many, many years, since some negative experiences Facebook had with social science experiments, that researchers had the ability to actually affect what people were seeing. So one, can we talk a little bit about the ethical and legal guidelines there, and two, why is it necessary to do that for this work?

Josh Tucker:

So let's take the first one. What we did here was, as Jen mentioned before, all of the changes to people's feeds were done for people who consented to participate in the studies, where they were recruited, asked if they wanted to participate in the studies, and compensated for their time for participating in the studies. They took surveys over the course of the study, but they also had consented to the possibility that their feed experience would be different on these platforms. So I think that completely changes the ethical framework for the study here, in the sense that this is not something that was being done without people's consent. It was done with people who did consent.

We also followed standard procedures around institutional review boards. For the experiments themselves, there was an institutional review board, an IRB, at NORC that all of the modifications were sent to, which approved all of the experiments, and they went through normal IRB procedure. The university researchers who were working on this project, the external researchers, all checked with their own IRBs to see what was needed in their case to be able to access this data after the data had been collected. So that was the way we approached this. We tried to think very carefully about what was the best way to do these kinds of studies, and we tried to follow the best practices in approaching these kinds of questions and doing this research.

Now, the question of why one wants to do this is super important, because in the US 2020 project, we have essentially two different types of studies. So we have studies that are done with observational data and that involve looking across all adult users in the United States on the platform. But in those particular studies, we didn't make any changes to people's experience, and the academic research team didn't have access to any individual data. We just had aggregated data to protect the privacy of the users on the platform. And that allowed us to have a broad view of what was happening on the platform and to look at a whole set of questions that are really important, that were really interesting to observe.

The other two papers from the project, besides the ones Jen was talking about: one of them was entirely observational, the ideological segregation paper. The other one, the like-minded paper, had an observational component to it and an experimental component. So the advantage of the observational papers is we could get a broad-based view and understand what was happening on these platforms at the time of the 2020 elections. However, it's very, very difficult to establish causal relationships with observational data, as I'm sure many of your listeners are aware, so we won't go into the challenges of that in great detail.

So if you want to try to get at causal relationships, the gold standard in social science research now is to do experimental research, where you can randomly assign some of the participants in your study to be in a control condition and some of the participants in your study to have the change made to their feed that you're interested in understanding. And then you can look at the differences between those two groups to understand what the impact of that change was. So that's why we wanted to have both observational studies at the larger level, but aggregated to protect the privacy of users, and then to be able to do these experimental studies that had individual-level data, but where the users had consented to participate in those studies.
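For listeners who think in code, here is a minimal sketch of that experimental logic. It is purely illustrative and not drawn from the studies themselves; the participant list and the apply_change and measure_outcome functions are hypothetical placeholders for the feed modification and the survey-based outcome measure.

```python
import random
import statistics

def estimate_average_treatment_effect(participants, apply_change, measure_outcome, seed=0):
    """Randomly split consenting participants into control and treatment groups,
    apply the feed change only to the treatment group, and return the
    difference in mean outcomes between the two groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    control, treatment = shuffled[:half], shuffled[half:]

    # Only the treatment group gets the modified feed (e.g. chronological order).
    for person in treatment:
        apply_change(person)

    control_outcomes = [measure_outcome(p) for p in control]
    treatment_outcomes = [measure_outcome(p) for p in treatment]

    # Because assignment was random, this difference in means estimates the
    # average treatment effect within the consenting sample.
    return statistics.mean(treatment_outcomes) - statistics.mean(control_outcomes)
```

The randomization is what lets the simple difference in means be read causally: on average, the only systematic difference between the two groups is the feed change itself.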

Alex Stamos:

So you had these overall IRBs, and then every university had to get involved. For our listeners who don't have to deal with academic politics, IRBs are these groups that look at the ethics of research. Unfortunately, at Stanford in the seventies, we had a history of locking undergraduates in basements, which has led to Stanford having a very aggressive IRB looking over this stuff. And I guess one of the findings here is that you didn't actually change people's feelings about politics that much anyway. So with the experiments in the end, because a lot of times IRBs want you to cure something, it doesn't feel like there was anything real that had to be... You did change the experience of folks, but it seems like it had minimal effect on the level of polarization and their voting patterns and such. Is that fair to say?

Jen Pan:

Yeah. So I talked about the chronological feed paper, and the other paper I worked on, we held out re-shared content from people's feeds, and in both of those interventions we don't see any change in terms of issue polarization or affective polarization. But I do want to say that in the re-shares experiment, what that did was reduce the proportion of political content and news content that people saw in their feeds, quite substantially in the case of news, and it decreased news knowledge within the sample during the study period. So people were less able to identify events that were happening at the time, like Amy Coney Barrett being appointed to the Supreme Court or the plot to kidnap Michigan Governor Gretchen Whitmer. So they were less able to identify these news events. So we do see a decrease in news knowledge when we hold out re-shares. But in terms of polarization, neither intervention, both of which are significant changes, had an effect.

Evelyn Douek:

Yeah. So I mean, some of these studies are going to fit right into people's preconceived ideas about how this works. For example, your findings that the algorithms were extremely influential and that there's significant ideological segregation, or that the chronological feed and holding out the re-shares respectively decreased time on site. But then you're also going to have these counterintuitive findings, that it increased the level of untrustworthy knowledge, or sorry, content, or, for example, that it decreased people's access to news. But I suppose probably the most counterintuitive finding, or the one that created the biggest waves, was this lack of effect on people's polarization or affective polarization.

So I just want to ask you about that. First of all, why? What do you think the mechanism is there? You talk about this a bit in the paper, but it'd be great to talk that out. And then the second question is, does that let platforms off the hook? If you can make all of these changes to platform design and it can have basically no effect on the thing that people are worried about (whether they should or should not be worried is a separate question), but if people are worried about polarization and changing these things doesn't affect that, does that mean that platforms are basically absolved of responsibility here?

Josh Tucker:

Yeah. Across the three papers, I think we want to be really careful about what we have learned and what we haven't learned from these. So we did what were quite long interventions by social science standards. We altered people's experiences on these platforms for three months. We did it in ways that were motivated by theory, motivated by questions that were out there: people are interested in virality, they're interested in echo chambers, they're interested in the effect of the algorithm. So we went in there seeing, if we changed these things for three months in a very politically consequential time in US politics, whether it would have the predicted effects that people thought these kinds of things would have. What we found is that over a three-month period, there were almost no effects on attitudes like polarization, ideological extremity, and affective polarization. You can dig into the papers to see which papers tested which of these mechanisms, but overall, altering people's experiences on these platforms had little effect on their attitudes off platform, in their own heads, at the end of the three-month period.

So what I take away from that is that, unfortunately, I think at this point it's difficult to say that there are these quick, easy fixes that we could have in our toolbox at politically contentious moments, at least in the United States. As a comparativist, I'm interested in seeing how this would play out in a less bipolar political system, in a system with multiple parties. But in the context of the United States, the idea that, okay, the 2024 election is coming up, we're going to dial down polarization by putting Facebook back on a chronological feed: we now have some evidence to suggest that, at least in 2020, that change didn't matter for the people who were enrolled in our studies.

That being said, and segueing into your second question, Evelyn, does this let the platforms off the hook? No, it doesn't let the platforms off the hook. It does suggest that there aren't these quick and easy fixes, but there are lots of caveats here, lots of things we can't speak to. One of which is that when Jen and her team set things up so that the people in the study didn't have access to reshared content, those people were still interacting with other people on the platform, people who weren't in our studies and who did have access to reshared content. So we can't actually know if this finding would extrapolate if the entire platform changed. And there are really difficult research questions about how you would actually ever identify that effect.

We also don't know what would've happened if we had done this, as I think I said earlier in the podcast, during a less politically contentious time in US politics. It's possible that we might find different effects if we did it at a period of time when people were not being inundated with information about politics from so many different sources. It's also possible that if we did it for a longer period of time, if we changed people's experiences on these platforms from now until the 2024 election, that might have a different effect than three months.

And then, most crucially to this question, does it let the platforms off the hook? What we really don't know, and the question that I think some people want answered, is: would the world today look different if we hadn't had social media built on an engagement-based model for the last 10 or 15 years? We can't really speak to that at all, because all these experiments are occurring in the context of a world where there has been social media for the last 10 or 15 years. So when we get to the big question, does this let the platforms off the hook? It's not a question of letting the platforms off the hook.

There are so many more questions out there to be answered. What it does tell us is that some potentially proposed easy fixes to try to deal with aspects of US society didn't, in this particular case, seem to have much of an effect. I also think what it really tells us is that we need to be able to carry out this research going forward in the future. This was one period of time in one country, and although we ran a number of these different interventions and we've learned a lot more than the public ever knew about this before, it's still only the tip of the iceberg. There are lots of interventions we didn't run, lots of things one might want to know about.

And I think it really points to the importance of having mechanisms in place to ensure that this research, which is crucially in the public interest, can happen. Now we're talking about this, now the mass public knows about the findings from these studies, and there's so much more we have to learn. And I think it points to the importance of, really, while the DSA is being, I don't even know how to say it, while the DSA is being discussed in Europe, some of the interesting things I've seen on Twitter after the studies came out are people in Europe being like, "Wow, we're realizing now the DSA is going to allow us to do these observational studies, but look what happens when you can do causal studies as well." So I really hope that what we've done here feeds into the conversation and points to the importance of this research, even if this research only tells us a first step.

Alex Stamos:

Yeah. And it also feels like another limitation, if I may, is that you're both asking people to volunteer for this, and the interventions you talked about are aimed at the normie social media experience: the normal person who wakes up, gets on Facebook, sees some political content, and maybe it makes them angrier or not. It's much less about the much more concentrated issue that we have where you have self-selecting groups, you have the QAnoners, you have people who are super aggressive about COVID on either side, who self-select into groups and are then intentionally there, deciding that that's the conversation they're going to have, and radicalize themselves. It feels like that wasn't covered by any of these papers.

Josh Tucker:

Yeah, Alex, I think it's a great point, and actually, in the three-plus years we've been working on this, we've had other projects come out of my lab that are not part of this project which keep pointing towards the same thing: we can do all these studies trying to look at average effects across the population, but more and more we're seeing that while we want to understand what's happening on these platforms in terms of average effects across the whole platform, we also want to know what's going on in these smaller, concentrated communities. I've been talking about the importance of tails-based research. That's not what we were doing here. This was a first chance to do this; we tried to figure out effects at the population level. One thing about... Yeah, and you raise a good point, and maybe Jen wants to speak to this.

You raise a good point about the fact that, yes, with these studies, the trade-off for doing it in a way where only consenting participants are in the studies is that you do get the types of people who self-select into these studies to take part in them. That's why we then randomize, which should take account of the fact that the effects that we find are the differences in this population. But in our studies, and Jen can speak to this, we looked at population average treatment effects, which is essentially trying to weight the sample when we do the analyses to get us an estimate of what the effect would be if our group resembled the full population. That doesn't get into your point about normies versus the people in the extremes, but it does get to the point that we were very cognizant of the fact that these were not perfect representations of the Facebook or Instagram users.
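As a rough illustration of that weighting idea, here is a hypothetical sketch, not the papers' actual estimator; the outcomes, treatment flags, and weights below are placeholder inputs, with the weights assumed to come from comparing the consenting sample's demographics to the full user population.

```python
def population_average_treatment_effect(outcomes, treated, weights):
    """Weighted difference in means. Each participant's weight reflects how
    common their demographic profile is in the full user population relative
    to the consenting sample, so the estimate approximates the effect for the
    platform's population rather than just the self-selected volunteers."""
    def weighted_mean(indices):
        total = sum(weights[i] for i in indices)
        return sum(weights[i] * outcomes[i] for i in indices) / total

    treated_idx = [i for i, t in enumerate(treated) if t]
    control_idx = [i for i, t in enumerate(treated) if not t]
    return weighted_mean(treated_idx) - weighted_mean(control_idx)


# Toy example: two treated and two control participants with made-up numbers.
effect = population_average_treatment_effect(
    outcomes=[0.8, 0.6, 0.5, 0.4],   # e.g. survey-based attitude scores
    treated=[True, True, False, False],
    weights=[1.2, 0.8, 1.0, 1.0],    # up- or down-weight to match the population
)
print(round(effect, 3))
```

With equal weights this collapses back to the plain sample difference in means; the weights only matter when the volunteers look different from the overall user base.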

Jen Pan:

Yeah. I just want to say this is four papers. There are more papers and projects in the works, and some of them, I think, may be able to speak more to the tails of the behavior. But I think ultimately it's this point of trade-offs: not only what content are you trading off, but what proportion and exactly what segment of the users are you potentially dealing with when you implement something like chronological feed, which has differential effects in different populations.

Alex Stamos:

I mean, one of the great contributions here is that if somebody was building a social media platform right now, which people are still doing (we were just talking about Threads, which has only existed for a month and already has tens of millions of users), there's finally good quantitative evidence about the decisions you make and what they do to people. Even if it doesn't change who they're going to vote for or the level of political polarization, it does have serious effects on the kinds of content they see and what their experience is on the platform, which is less globally important, but if you're designing a platform, these are very important things to be thinking about.

Do you have specific recommendations, Jen, that you would come away with from this? In doing this work, you were unpolluted by ever having worked for social media, so you were allowed to move through certain halls from which I, as somebody who has worked there, am always excluded. Did it, one, change your mind about some of the decisions that were made? And two, if you decided, say, "Man, Stanford, this place is getting a little boring," and you were going to roll in and work at a company, would it change the decisions you would make or the suggestions you would give to executives about the kinds of things that they should be considering?

Jen Pan:

I think the big takeaway for me is what Josh said: there are no silver bullets, no simple solutions. You make a seemingly simple change like chronological feed, but it changes so many things about the experience of users and how they then use, or don't use, the platforms. So for me, if I had any recommendation, it would be for platforms, and more platforms, to make more of their data accessible and transparent to researchers, so we can continue to dig in on more of these dynamics and understand the mechanisms of what's going on.

Alex Stamos:

Because as you pointed out, say you're driving people to TikTok, you don't have any visibility then into the content they're seeing. So one of the things that might be happening here is that people want to get certain kinds of content and they're going to the places where they can get it. But that's a hard thing for you to tell from this study.

Josh Tucker:

I was going to say there's one interesting finding in exactly that regard, Alex, in the like-minded paper. So this was the paper that appeared in Nature, and this was trying to get at, again, the echo chamber question. So in the study there was an observational component, where they look at how much content people are getting from politically like-minded sources, and then there's an experimental component, which looks at what happens when you reduce the amount of content you're getting from politically like-minded sources. So this is getting at the echo chambers part of it, and there's one super interesting finding buried in there, or not buried, it's right there in the paper, which is that when you reduce the amount of content that people get from like-minded sources, the rate at which they engage with politically like-minded content actually goes up.

So there are so many things that seem like they make sense intuitively, but as Jen was saying, when you actually go and do these analyses, you learn about these other impacts of the changes that you're making. And one thing that I've taken away from this is that platforms must be doing this research all the time as they think about what changes they want to make (I mean, Evelyn, this is your area), what changes they want to make to the platform and what sorts of things they want to do. And when we think about the number of people who are accessing content on these platforms, these are very complex relationships, and the public and policy makers really need to know what the implications of these decisions are. And to me, this is hopefully one of the lasting legacies of this project: we've now seen that it is possible for the public to learn these things. It is possible for us, instead of just intuitively thinking, well, if we reduce the size of the echo chamber, this is what's going to happen, to actually have some evidence on it. But we need more.

Evelyn Douek:

That's great. Extremely complex systems seems to be the key takeaway. Turns out that with humans and society and social media platforms, it's complicated. So thank you very much, both of you, for this work. I'm sure it was an epic effort, and it's really, I think, contributed to the public debate. And thanks for coming in and talking to us about it. Okay, I know we're running a little bit long, but I cannot let our listeners go without the update that they've been waiting for: the calf update on Sam Kerr.

Alex Stamos:

The calf update.

Evelyn Douek:

The calf update. The muscle on which an entire nation's hopes rest.

Alex Stamos:

Yeah. We will reuse the sound for the tearing of ligaments and muscles. There you go. Okay, great. So what is the update? How are the Matildas doing in the Women's World Cup? And how is Sam, that specific Matilda, doing?

Evelyn Douek:

Yeah. Well, thank you for not letting me pass up the opportunity to give a big boast that this morning the Matildas beat Canada four-nil.

Alex Stamos:

Take that, Canucks.

Evelyn Douek:

Exactly. In a do-or-die match to advance through to the final 16. The calf that could was apparently match-ready, but we fortunately did not have to pull it out for the game because the Matildas played so well without their star striker. So Sam Kerr was just sitting on the bench cheering wildly, and she gets to rest her calf for one week more. And on an inspiring note, a record number of tickets, 1.5 million, have been sold, making this year's World Cup the most attended women's sporting event ever. So that's very, very cool and super exciting to watch.

Alex Stamos:

That is awesome. We gave a shout-out last week to Jameel Jaffer, who's Canadian, so now we have to pull it back. Jameel, take that, on behalf of the best of the former colonies. Is there a legal... are you guys still tied together, Australia and Canada, other than some of the Five Eyes stuff and such?

Evelyn Douek:

I feel like there was no good answer to this question diplomatically. I mean we participated in the Commonwealth Games and-

Alex Stamos:

They're both Commonwealth countries. So it's possible that, between the death of the Queen, who was a famous proponent of the Commonwealth, and now this drubbing of Canada, this will be the end and the entire Commonwealth might fall apart. So let's see what geopolitical shock comes out of this complete drubbing. Four-zero, that's a brutal football score.

Evelyn Douek:

Extremely proud. The ties are fraying, but totally worth it. So I'm looking forward to watching the replay later today. And with that, this has been your Moderated Content weekly update. The show is available in all the usual places, including Apple Podcasts and Spotify, and show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't be possible without the research and editorial assistance of John Perrino, policy analyst extraordinaire at the Stanford Internet Observatory, and it is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman. Talk to you next week.