Moderated Content

MC Weekly Update 1/9: New Year, Same Trust and Safety Issues

Episode Summary

Alex and Evelyn kick off 2023 with a rollicking tour through stories about: Meta's $400 million EU fine for privacy violations (cameo by Daphne Keller), Google's new appeals process for users flagged as uploading child sexual abuse material, WhatsApp introducing proxy servers, Twitter reinstating political ads, Meta's struggles to get out of politics, an Oversight Board decision about Iranian protests, information from the Jan 6 Committee about their findings to do with social media, and what's going on in Brazil.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Alex Stamos:

You teaching yet?

Evelyn Douek:

No. This is the best job in the world. I don't teach until April, so I'm just thinking really hard. I swear, I'm thinking super hard, intelligent, sophisticated thoughts. That's what I'm doing.

Alex Stamos:

I can feel that just flowing through the podcast right now.

Evelyn Douek:

Hello, and welcome to Moderated Content's weekly news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos. Happy New Year everyone. My New Year's resolution this year is to try and talk slower for you, so we'll see whether this is like everyone else's New Year's resolution and is over by the end of this episode. But I'm initially committed, so we'll see how we go.

Alex Stamos:

Yeah, mine was to podcast less. So I've already violated it just today.

Evelyn Douek:

If only the world would help us by slowing down the trust and safety news. Alas, the good news never stops. New year, same old trust and safety news. We are going to start today with the headline that Meta was fined 390 million euros, or around $414 million, by European regulators last week for violations of the General Data Protection Regulation, affectionately known as the GDPR. Now, the GDPR makes my brain hurt, so we got Daphne Keller, our colleague at Stanford, director of the Program on Platform Regulation at the Stanford Cyber Policy Center, former general counsel for Google, and all-round platform regulation encyclopedia, to tell us what is going on and why it matters. So here's Daphne.

Daphne Keller:

European regulators have fined Meta almost 400 million euros, which is a lot of money even for Meta. But more importantly I think, part of that fine is about a conclusion that the way Meta collects and uses data for ad targeting is illegal. What that means is this fine isn't about a one-time data protection peccadillo. It's not even about a really big screw up, like Cambridge Analytica. It is about the actual foundation of the business. There's two things to know about it, one is about the actual law, and the other is about the EU politics around enforcement of the law. The law is the general data protection regulation, the big overhaul of EU data protection law, or Americans might call it privacy law. Affecting, among other things, tech companies. Under the GDPR, you have to have a formal basis for processing the data that you get from your users.

There's an article in there, article six, that lists the specific permissible bases. One of those bases is consent. And this is what Facebook used to say about targeted ads, is that by using Facebook Blue, the main property, users are consenting to share their data for targeted ads. Along about the time the GDPR came into effect, Facebook changed that and said, no wait, this isn't based on user consent, it's based on a separate basis under the law. Which is that the data is necessary for the performance of the contract that the user entered into. The details of those things get deep into the GDPR weeds. And a lot of what this ruling is saying is, no, what you're doing, Facebook, doesn't qualify for the details of what we consider to be consent or an adequate contractual basis for processing data.

But if you pull out and look at this as a really big picture legal and policy question, Facebook is saying, look, we have a product. The product involves an exchange of data that allows us to make money for us giving you this free service, take it or leave it. And the regulator is coming in and saying, nope, you cannot offer that deal. It cannot be a take it or leave it option. You have to offer a version of the Facebook service that is not supported by targeted ads, that makes money some other way. And I got into it on Twitter with a bunch of data protection experts over the past few days about exactly what the limits of that authority are. When is it the role of a privacy regulator to say, this is a bargain that you cannot offer? You have to offer a different bargain, you have to have a different foundation for making money, or a different deal that you are offering your users. I think this is a deep question, and this case is an important step in working it out.

The institutional power piece for the EU is, as you probably know, Ireland functions for tech companies in the EU the way that Delaware does in the US. It is a place perceived as being relatively friendly in regulatory terms, a good place to have your establishment and main place of incorporation for business in Europe. That perception about Irish regulators as going easy on American tech companies is not making them popular in Brussels right now. So you're seeing a whole lot of moves to try to centralize more authority in Brussels, or for other EU countries to assert more authority. We see that in the DSA, the Digital Services Act, which situates an unprecedented amount of regulatory power over tech companies in Brussels with the EU government instead of with any national government. The GDPR has a different model. The GDPR has primary authority in the country of establishment, so in this case in Ireland.

And in this Facebook case, the Irish Data Protection Authority said that they did have an adequate basis for processing the data, that Facebook was right about being able to offer targeted ads on this basis. But the GDPR has a mechanism for other regulators in other countries to disagree, and for that disagreement to ladder up to a centralized authority in Brussels, the European Data Protection Board. And the European Data Protection Board, that central Brussels authority, is the one that issued this ruling just now. This is a repudiation of what the Irish regulator had said. And until now, American lawyers and academics could largely disregard these federalism questions within Europe about when national authority versus centralized Brussels authority is in charge of tech regulation questions.

But man, we sure are seeing our own version of that today with so many state legislatures in the US passing very specific tech regulation laws, speech laws like the ones in Texas and Florida. Or privacy laws or child protection laws. And we are having very analogous questions about when that's appropriate and when there should be a centralized federal regime. So part of what's playing out in Europe is effectively a federalism fight about when the central authority decides things and when it's an individual member state.

Evelyn Douek:

Okay. And so we should also note that Meta has said that "We strongly believe our approach respects the GDPR, and therefore, we're disappointed by these decisions and intend to appeal the substance of the rulings and the fines." So as with everything, the saga goes on, there is no quick resolution. So we'll be following that.

Okay, another story over the break: Kashmir Hill reported that, in response to her previous reporting that we've mentioned on this podcast before about Google disabling people's accounts when its technology flags child sexual abuse material in innocuous circumstances, Google has now implemented a new appeals process that allows users to provide extra context for why they took a photo. In the original story, a father had taken a picture of his nude child to send to his doctor, and he had access to all of his accounts taken away.

Again, in this Kashmir Hill story, a person had their account disabled when their nine-year-old got hold of their phone and uploaded a YouTube Short of themselves dancing around naked. Which, good for them starting their career early. I'm sure they won't regret that at all. And so I think this is a really positive step forward that shows how reporting can be useful. But there's no detail here about how the appeals work, how it's staffed, who's going to be looking at them, or how quickly people are going to get through these appeals. The devil's in the detail, but it's a good example, I think, of how process is also policy when you're thinking about these things.

Alex Stamos:

Yeah. And this is the kind of thing that allows you to do aggressive trust and safety work, having an appeal program and such. I feel like there were a couple of different reactions to Kashmir Hill's story. Some people were just shocked that scanning happens at all on something that you're not sharing, but are storing unencrypted in the cloud. Which was obvious to anybody who was paying attention, but not to as wide an audience as the entire readership of the New York Times. But then there were other people in the camp I'm in: a lot of my libertarian principles have fallen away based upon the child safety work that I've done, at least to the point that I think the scanning itself is reasonable. But you do have to have both intelligent decisions about what gets referred as well as an appeal process. The dancing one, that's interesting in that generally these companies are pretty smart about not reporting images that are just innocent images of kids naked in the bath, or something like that.

We don't have to talk about the standards here, but there are two classes of CSAM. There's the actual sexual contact, but there's also lascivious exhibition in the law. And defining what is lascivious turns out to be, as you can imagine, something for which there's a huge number of court cases. But generally, a kid just dancing around naked would not count for that. It's interesting that they announced the appeal process. I wonder if they're also looking a little bit at whether they're overreporting on pieces of content for which there is an innocent explanation.

Evelyn Douek:

Right. The introduction of the appeals process could be a great opportunity for Google to provide more transparency into its false positive rate and how its technology is working. The point of these stories is that this technology, which is designed to detect new instances of child sexual abuse material never seen before, and so not added to a database of previously identified child sexual abuse material, has errors, as any technology does. And so it would be great if Google wanted to provide some transparency into what it's seeing. This may be an opportunity that brings to light endemic over-notification and false positives.
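To make the false positive point concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers in it are hypothetical illustrations of the base-rate problem, not Google's actual figures.

```python
# Why even an accurate classifier generates many false positives at
# cloud-photo scale. Every number here is a hypothetical illustration.

def expected_flags(photos_scanned: int, prevalence: float,
                   true_positive_rate: float, false_positive_rate: float):
    """Return the expected (true positive, false positive) counts for one scan pass."""
    abusive = photos_scanned * prevalence
    benign = photos_scanned - abusive
    return abusive * true_positive_rate, benign * false_positive_rate

# Hypothetical: a billion new photos, 1 in a million actually abusive,
# a 95% detection rate, and a 0.1% false positive rate.
tp, fp = expected_flags(
    photos_scanned=1_000_000_000,
    prevalence=1e-6,
    true_positive_rate=0.95,
    false_positive_rate=0.001,
)
print(f"expected true positives:  {tp:,.0f}")   # ~950
print(f"expected false positives: {fp:,.0f}")   # ~1,000,000
# False positives dwarf true positives, which is why human review,
# appeals, and transparency about error rates matter so much.
```

On those assumed numbers, roughly a thousand true detections would be swamped by about a million false flags, which is the dynamic being described here.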

Okay. Moving to WhatsApp, a fair bit of news about WhatsApp to cover. Three related stories that you flagged, Alex. The first was that it has announced proxy server support to prevent blocking by internet censors in repressive regimes, which is a favorite and escalating tactic of repressive regimes: just block entire platforms. And just today the Supreme Court denied cert to NSO Group, which was claiming foreign sovereign immunity from suit, so the suit that WhatsApp is bringing against it can continue. And then of course WhatsApp is extremely big in Latin America, and so has had a key role in the Brazilian protests, which we're going to come back to later in the episode. But curious for your thoughts, Alex, on this proxy server and what it means.

Alex Stamos:

Yeah. These three big stories demonstrate the difficult trade-offs that we have in this area. Which is, when you provide people freedom and privacy, they can also use that freedom and privacy to do things you disagree with. The proxy server's a big deal. As you said, countries block WhatsApp. But when we talk about blocking, often people are thinking about China, Iran, these big obvious blocks. But if you actually track this stuff, it turns out basically every week some government ends up blocking some service or a bunch of services based upon an election or something not going their way. Famously, a couple of countries like to block all services during their national high school exams because they want to prevent cheating, and so they do that by blocking WhatsApp for the entire country. And so you'll have these kinds of things happen on a regular basis.

And so some of them are obviously much larger human rights violations than others. The interesting thing here is that what WhatsApp has done is create a system that allows you to choose your proxy and that is still mostly secure. The fact that you are using a proxy can become obvious to somebody. There's always this difficult trade-off here at the companies in that a lot of the oppression that happens is offline. It is not just complex work by the Ministry of State Security using the Great Firewall. In a lot of countries, in North Africa for example, after the Arab Spring you had secret police just going to people and saying, "Show me your phone." And so this can't protect against that. And in fact, if somebody says, "Show me your phone," they're going to be able to see that you're using WhatsApp via a proxy. So there are still risks.

But it does provide the ability for people to circumvent, and it allows activist groups and such to run their own proxies. So that's the other interesting thing here, is that instead of a centralized system, it is decentralized and it's going to be incumbent on individuals to try to find proxies they trust. And so, it will be interesting to see how this works as an experiment. This isn't that different from how Tor bridges work. But the people who use Tor are generally pretty technically sophisticated, whereas billions of people of varying levels of sophistication use WhatsApp. And so, it is definitely going to be possible to get yourself in trouble by opting into a proxy that is run by bad guys. They will not be able to see the content, but they can see that you're circumventing their block.
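As a rough illustration of that last point, here is a toy Python sketch of a relay that forwards only ciphertext: it can log that a connection happened, but it cannot read the message. This is a sketch only, using a symmetric Fernet key from the cryptography package rather than WhatsApp's actual Signal-protocol encryption, and the names and addresses in it are made up.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical setup: sender and recipient already share an end-to-end key.
shared_key = Fernet.generate_key()
sender = Fernet(shared_key)
recipient = Fernet(shared_key)

def proxy_relay(ciphertext: bytes, source_ip: str) -> bytes:
    """A volunteer-run proxy: it sees metadata (size, source), never plaintext."""
    print(f"[proxy] relaying {len(ciphertext)} bytes from {source_ip}")
    return ciphertext  # no key, so it can only pass the opaque bytes along

message = sender.encrypt(b"meet at the usual place at 6pm")
delivered = proxy_relay(message, source_ip="203.0.113.7")
print(recipient.decrypt(delivered).decode())
```

The trade-off described here falls out of the same picture: the proxy operator, or anyone watching the connection to it, learns that you are routing around a block even though the content stays opaque.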

I think it's a good move, but it also needs to be treated as an experiment. And there needs to be some real talking to human rights folks on the ground to see what the possible impacts are of the low-tech oppression against this kind of stuff. And then the NSO Group, again, this is something I'm not neutral on. I helped push for that suit for years at Facebook, and they finally filed it after I left. And I think it's great. NSO Group has enabled lots and lots of horrible activity by oppressive governments, and to hold them legally accountable for that, when they used to be at least a wildly profitable company, I think is appropriate.

Evelyn Douek:

Great. And we'll come back to the Brazilian protest story towards the end of the episode. In our recurring weekly Twitter segment: Musk kindly slowed down the fire hose of news over the break. But one of the bigger stories was that Twitter announced, by a tweet of course, that it was intending to relax its policies around political ads in the US. Twitter had famously been one of the first out of the gate in banning all political advertising, but that's no longer going to be the case, and political ads are going to be allowed. I think this is a really interesting one. In the lead-up to elections, you'll see a lot of people on the left thinking that these bans are really good, that the ads are a vehicle for a lot of misinformation, and that political ads, which are a source of revenue for the companies, are basically only harmful.

I think it's a lot more complicated than that, and I'd point listeners to some research by Scott Brennen and Matt Perault at Duke University, which showed that one of the big effects of these bans in the lead-up to the 2020 elections was that they had very little impact on misinformation but generally hurt poorer and more local campaigns. The people who had access to lots of funds and could run national campaigns, or place ads on TV and radio, were unaffected, but more local campaigns and challengers, not incumbents, struggled a lot more. And so I think it's another good example of where the initial superficial story might not... The attractive one might not necessarily be the reality, and it's something to keep in mind.

Alex Stamos:

Yeah. There's always been this overwhelming focus on political ads that has never been backed up by empirical evidence that ads are especially harmful. I do believe that companies have more responsibility around advertising, in that they are taking money for the amplification of speech, so they should have higher standards. But ever since the revelations in 2017 of all the 2016 election stuff, everybody focused on Russian ads. Everybody focused on, we have the stupid Netflix documentary that is completely ridiculous on the impact of ads and on ad targeting and such. There's been so much disinformation, including from a Harvard professor who wrote an entire book about this that is completely unempirical and not based upon any evidence. In fact, just today Nature Communications has a new paper on the impact of Russian activity in 2016, both organic and ad-based, that really downplays it. I think you and I can cover this in more detail after we've had time to read it more, but there's always been all this focus on ads.

And the truth is that online ads are a massive democratizing force. If you are a small NGO and you want to run ads on global warming, you do not have the money to go do a television ad and do the minimum buy of $50,000 or $100,000. But you can do an online ad and run it for 100 bucks, 500 bucks, 1,000 bucks. And so if you get rid of online political ads, all you do is cede the field to the people who have money. ExxonMobil, BP, they can afford the slickly produced television ad. They can afford to have the direct advertising relationship with the newspapers and such. They don't need the programmatic ad networks. Twitter banning political ads was always them just dodging responsibility, trying to generate some short-term positive press for themselves, and it really had no positive impact. I think it only had a negative impact.

You're just ceding the field to people who have lots of money. I'm actually glad Twitter did this, and I'm glad that we're starting to return to some sanity around the empirical evidence of how bad online advertising actually is. Because in the end, one, it's what pays for the free internet. The fact that billions of people have access to Google and Facebook and Twitter and all these things for free is only because of advertising. If you charge three bucks a month, then that cuts off a lot of people from the world. And second, it actually does democratize the ability for smaller, less well-funded groups to reach large numbers of people. That doesn't mean they should have no policies.

The flip side here is, if Twitter just did it under normal circumstances, that'd be fine. At the same time that they're also dropping all of their rules, I do think there could be some negative impacts. Just because, if they're not going to enforce rules against foreign influence groups, if they're not going to enforce CIB rules, if they're not going to enforce even basic hate speech rules, then letting ads back in can make things marginally worse. Although, I think it'll be not as bad as anybody thinks.

Evelyn Douek:

I think another story this week really highlights a lot of those dynamics, but in a different context. The Wall Street Journal had a great story on Meta's project to basically try and get out of politics, and how there's no escape. You can't run away from politics. With Facebook, and Mark Zuckerberg in particular, just tired of being beaten around the head about political content on Facebook, there was this project to really reduce the amount of political content that users saw in their Facebook feeds. And it turned out that was actually extremely difficult to do. Once you start doing that, more trustworthy, established media organizations take a real hit on their number of clicks and their visibility, and a lot of less authoritative sources get lifted. And another piece of collateral damage is that a lot of local groups, local fundraisers, and local civic organizations also get downranked. Because it turns out that broad civic demotion, or demoting anything political, is really difficult to do.

It's not as simple as just flicking a switch on the problematic stuff. And one of the stats in the article says that there was a 10% decline in donations to charity over the period that Facebook was testing this feature. Yeah, I just think these are really interesting, concrete examples of the trade-offs here. And I always get cautious when people are really celebrating telling these platforms to get out of politics. I don't know that a user base of Facebook users that never engages with high quality political content is the kind of world that I want to see.

Alex Stamos:

Everybody who says, "I want Facebook out of politics," means, I want Facebook not to promote the politics I disagree with. You can always substitute one for the other. And like you said, the personal is political. We've had several examples of this, where Facebook tries to make one of these changes to duck some kind of controversy over the emergent behavior of users, and then you just get a different emergent behavior. You get rid of links to fake news sites, and so you end up with people pushing QAnon theories with organic content, without links, where the content all exists on the platform itself and in private groups. So with this widespread trying to make people better, as you and I have talked about a number of times, I think the impulse needs to be: what are we doing to actually make things worse, and how do we reduce that? Trying to improve people, saying, oh, we're going to reduce political content because the politics people have is bad, I think is always going to have really negative downstream consequences.

Evelyn Douek:

Speaking of Facebook and politics, its decision on whether to reinstate Trump's account is coming in the next few weeks. We are now at its self-imposed two-year deadline for when it will make a new decision. I don't know exactly when it will drop. I imagine they are waiting for some particularly egregious Twitter Files to drop, and then they'll just pop out a little blog post in the wake of some Musk controversy.

Alex Stamos:

Yeah. I expected it to happen on Christmas day. I feel like Facebook comms is really losing their touch.

Evelyn Douek:

Yeah, exactly.

Alex Stamos:

It's interesting, because Trump has been reinstated on Twitter and it meant nothing. I think the fact that Trump basically did not come back to Twitter does make it easier for Facebook. The question is, are the things different on Facebook? I think a lot of this depends on how serious Trump's campaign for president is. If he ends up running a serious campaign, then at least fundraising and such will be happening on Facebook. He wants to use Gettr as his outlet for him to make news. [inaudible 00:22:01] Facebook was never the place where he would... I'm sorry, not Gettr. I'm sorry, Truth Social. Facebook was never the place where Trump was making news. It was the place that was used by his professional staff for his videos and content, and especially for advertising. I think of all of the parts, the access to the Facebook advertising will be the thing that will be most interesting, since Twitter advertising's never been very good, and it seems to be completely falling apart now.

Evelyn Douek:

Yeah, it is depressing that Trump is showing more self-restraint in the use of his Twitter account than I am. But there you go. Okay. And the last piece of Meta news is that just this morning the Oversight Board released a new decision to do with the Iranian protests. It overturned Meta's original decision to remove a Facebook post that contained the slogan, [foreign language 00:22:48] Khamenei, literally meaning "Death to Khamenei," but commonly associated with the protests there; "Down with Khamenei" is the more general translation. And one of the things I'd say is, for people who haven't looked at an Oversight Board decision in a while, it's worth going and having a look. I actually think it's a pretty impressive decision in many ways that reflects a lot of the complexities of doing content moderation at scale, and this is not a situation where Meta comes out looking ridiculous with lots of egg on its face.

I think Meta actually comes out looking pretty thoughtful. The decision talks about how Meta often implements allowances for this kind of rhetoric in moments of political protest, but it hadn't done so in this particular case. And the board said this shouldn't depend on a special allowance: given the context, and given that this is clearly not a threat where it's obvious it's going to result in actual harm, it should be allowed. Meta had applied a newsworthiness policy and so had reinstated the post, but the board was saying that was not enough. I think it's generally a good decision, but it raises all sorts of issues. The board says this is not a universal rule. The same slogan with Salman Rushdie's name inserted would be a different context, where there might be an actual likelihood or possibility of harm occurring.

And it also talked about the lead-up to the January 6th riots in DC, where it said death to politicians in that context, like "Hang Mike Pence" for example, might be different because of the actual threat to politicians' lives. I'm not sure how convincing I find that, in that hindsight is 20/20: we can see in retrospect that there was an actual threat to politicians' lives. But I don't know that at the time, in the lead-up to those events, anyone would necessarily have thought about how different the threat was. But I don't know, I think these are incredibly hard, difficult decisions. Overall, though, the discussion is much more nuanced than is normally seen in reporting and discussion of things like threats and political rhetoric in online conversation. So it's worth having a look.

Alex Stamos:

Yeah, I agree. It's a hard one. Again, this is one of those things that depending on where you are, death to blank is going to read very differently depending on whether you agree or not that person deserves to die. And it is nice to see this kind of thing, because it is nice to have neutral speech experts look at this from both a human rights as well as a realistic incitement to violence standard. But yeah, it's a hard one. I don't think there's any one right answer here. And so having a bunch of people argue over it like this and try to come up with a standard is probably the best you could do.

Evelyn Douek:

Okay. Speaking of January 6th, it's worth flagging a post in Just Security and Tech Policy Press this week from investigators and analysts who worked for the January 6th Committee, specifically on the role of social media in the lead-up to January 6th. It was quite a nuanced post. They said that most of this material did not appear in the final report because of decisions the report authors made around focusing on the role of the president, but they wanted to make clear that they didn't believe that meant the role of social media platforms should be ignored. And they documented a lot of this rhetoric in the lead-up to January 6th happening on these platforms, including and especially on far-right platforms and not just the mainstream platforms. One of the things they say is that they found no evidence that these companies refused to take content down on the basis that they thought it was a good business decision.

It was not that this content was highly engaging and therefore, let's keep it up for the clicks. It was more about political reasons and political ramifications, and about taking their foot off the gas in the wake of the election, when they'd done pretty well up until the election, which created the gap for this kind of content to get through. Again, it's interesting: it is in complete contradiction to the Twitter Files, in that they say there is no evidence of anti-conservative bias. If anything, it was completely the opposite. But it's worth having a look. The post also says lawmakers are far too focused on the role of algorithms and regulating algorithms. Their assessment is that you could get rid of algorithms tomorrow and these platforms would still be vast libraries of extremist content that is searchable and can be found by anyone looking for it.

And so, the first thing to do is to work on transparency and go from there. I thought it was a nuanced post. I thought it was interesting and worth having a look at. And the big story this week, although we're still obviously waiting for more empirical analysis and things are developing, is the protests, or riots, in Brazil. We were just talking about January 6th, and this is being called Brazil's January 6th moment. Alex, you called it early in terms of Twitter's role in Brazil being a really important thing to watch in the wake of the Musk takeover, given Tesla's interests there. I'm curious what you are seeing in these early days about what happened across the internet in Brazil.

Alex Stamos:

Yeah. If you're looking for a description and some examples of what's going on, my colleague David Thiel posted about it on Mastodon; we'll put that in the show notes so folks can see it. As you can imagine, just like everything else, all the communication here is being intermediated by big American companies. No matter what was going to happen, big American companies were going to have a role in what is happening in Brazil. Clearly, the base responsibility here is on Bolsonaro and his supporters, who have been questioning the outcome ever since he lost. Just like with Trump, he had effectively indicated in advance that he thought the election was going to be stolen, and tried to prime his followers into the belief that it was stolen. He is currently in the US. This is not an extradition podcast, but it'll be interesting to see the politics of it if we have a grown-up Elián González situation here, with Florida trying to protect him while the federal government ends up extraditing him to Brazil.

There's a bunch of crazy stuff that might happen legally with Brazil from here on out. But for the online content moderation, one of the things David points out is that effectively the companies have done very little compared to what they were doing around January 6th. Whether you agree it was enough or not, just quantitatively the work that's been done around this is less. Now, I think there are three things going on here. One, on Twitter they've completely decimated any work in this space; there's nobody doing any of this work anymore. And without a team looking at coordinated groups, it is quite possible you have pro-Bolsonaro groups running hundreds or thousands of fake Twitter accounts. You would never know now, because there's nobody at Twitter minding the store in that area. And so Twitter has changed.

Two, Facebook and YouTube and such are obviously paying much less attention to Brazil: they have fewer people working in Portuguese and much less aggressive standards. The third big one is that the most important platform in Brazil is WhatsApp. And so this goes back to the discussion: the hardest trade-off in all of trust and safety is safety versus privacy. WhatsApp is fully encrypted; there's no mechanism for WhatsApp to do proactive content moderation. You can only have people report stuff and then actions be taken, and the things that they allow actions to be taken on are actually quite limited. Quietly, one of the things WhatsApp had done was limit the size of groups in Brazil. They have since lifted that limit and now allow these mega groups, these new very large end-to-end encrypted communities. That competes against Telegram specifically, but Telegram is not really end-to-end encrypted and WhatsApp is. And so as a result, you end up with these very large groups.

And so there is, I think, going to be an open question of, how much did those very large groups drive this? In the end, then the question is, do you get rid of that entire feature just because it can be used in a poor way? And then right back to what we talked about up top, WhatsApp can be used for things you and I agree with, like women trying to get their rights in Iran. It can also be used to try to overthrow a democratically elected government in Brazil. And so, I think probably the use of WhatsApp in something like this is just going to be something we're going to have to live with if we want people to be able to have private communication.

The downside of that is that bad people will have private communication too. And in the end, platforms are important, but they're not determinative here. This is being driven by the political situation in Brazil. The people who are going out there saying the election was stolen and telling people to go take over these buildings, they are the ones who should be held criminally responsible, just as I think they should be in the United States.

Evelyn Douek:

That's really interesting. I hadn't known that about WhatsApp increasing the group size. You're right, that is quite... And a quick Google shows that they've actually been doing that steadily over the last year. Which is really interesting, because it just shows how a lot of content moderation is platform architecture and affordances. One of the things that WhatsApp always boasted about when it got into trouble in India around rumor-spreading and things like that was that it introduced more friction and reduced forwarding limits and things like that. And so it's been quieter and less transparent about these changes, or at least less vocal in extolling them.
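To illustrate what friction by architecture can look like, here is a minimal Python sketch of a forwarding cap. The thresholds and names are illustrative assumptions, not WhatsApp's actual limits or code.

```python
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    forward_count: int = 0  # how many times this message has already been forwarded

def forward(msg: Message, target_chats: list[str],
            normal_limit: int = 5, viral_limit: int = 1,
            viral_threshold: int = 5) -> list[str]:
    """Allow forwarding to at most `normal_limit` chats, dropping to
    `viral_limit` once a message has been forwarded `viral_threshold` times."""
    limit = viral_limit if msg.forward_count >= viral_threshold else normal_limit
    msg.forward_count += 1
    return target_chats[:limit]

rumor = Message("unverified breaking claim", forward_count=7)
print(forward(rumor, [f"chat-{i}" for i in range(20)]))  # ['chat-0']: already viral, one chat only
note = Message("family dinner photo")
print(forward(note, [f"chat-{i}" for i in range(20)]))   # first five chats only
```

The same kind of logic can cap group sizes or slow re-sharing; none of it requires reading message content, which is part of why it is attractive on an end-to-end encrypted platform.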

I'm going to brutally steal from David here, from his Mastodon post, which we will definitely link to. He summarizes the platform responses, in short: Twitter, we purposely fired everyone who deals with this. Facebook, here's a link to the electoral justice homepage. Telegram, we're pretending it's all encrypted. Gettr and the alts, even if we knew how, we wouldn't do anything. And WhatsApp and TikTok had their own issues. It's a mess, and it's a shame, because this was foreseeable. And I hope it gets a similar level of attention as all of the January 6th stuff did. Just because it happens overseas doesn't mean it shouldn't.

Alex Stamos:

Yeah. And the interesting thing, when we go back to Twitter, is that Brazil is a very important country for Musk. As you and I have talked about, there are pictures of him shaking Bolsonaro's hand, coming up with deals. Brazil mines a lot of the materials that go into Tesla's batteries. Brazil's a huge market; obviously it's a very important developing economy. And he wants to have a good relationship with Brazil no matter what. And so I think this is the first real huge geopolitical moment for him, in that he was just asking questions, doing his thing where he interacts with people from the alt-right. And when folks were saying it was stolen from Bolsonaro, he expressed a little bit of interest in those things. And Musk expressing interest in something, just him replying and saying, "I'll look into it," or "That's interesting,"

ends up driving a lot of discussion, because all of his fanboys end up believing that thing is absolutely true. And now he is saying, "I hope things resolve peacefully in Brazil." Clearly, his people are telling him Lula is going to maintain control of Brazil, and he's going to have to get along with the Brazilian government. And so he is caught between the MAGA fanboys that he has been re-platforming and the fact that he needs to maintain a reasonable working relationship with Brazil for the benefit of Tesla and its continuously dropping stock price. The fascinating thing is he just re-platformed Ali Alexander, who's one of the organizers of January 6th, at the same time that he is saying he hopes things resolve peacefully. Anyway. Once again, it demonstrates the difficulty of him having all of these different jobs and all of his money tied up in a company that actually physically makes things and sells them around the world.

Evelyn Douek:

And just because we were talking about Trump's account, it's also worth noting that Bolsonaro still has his Facebook page and his Twitter account. They're both still active. It shows that Trump's real sin was being a bit of an edgelord, because Bolsonaro is just tweeting and posting very inane stuff about how great his term was, rather than anything inflammatory or problematic like Trump did. And there's a real question for social media companies about what they should do in these contexts: do they just focus on what's happening on the platform and how it's being interpreted, or do they look at surrounding context when they remove accounts? To my mind, it becomes very difficult and dangerous if platforms start being a judge of what's going on in the world much more broadly than on their own platforms. But it's something to keep an eye on.

Alex Stamos:

Yep. Totally agree.

Evelyn Douek:

Okay. And, did you have anything else that you wanted to cover before we closed out today?

Alex Stamos:

No. Happy new year, everybody. Looking forward to a wonderful 2023. You teaching yet?

Evelyn Douek:

No. This is the best job in the world. I don't teach until April, so I'm just thinking really hard. I swear, I'm thinking super hard, intelligent, sophisticated thoughts. That's what I'm doing.

Alex Stamos:

I can feel that just flowing through the podcast right now, all of your thought. Yeah. I don't have to teach until April either, so it is nice to get a little break, but it's going to be a big one. And things are really interesting on campus at Stanford right now. We could turn this whole podcast into a Stanford controversy podcast too, although that might not be good since neither one of us is tenured.

Evelyn Douek:

Well, yeah, exactly. If there's a slow week in trust and safety news, maybe we'll cover that. I'm sure there will be a content moderation and trust and safety angle to all of the Stanford stuff inevitably. And with that, that has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't have been possible without the research and editorial assistance of John Perrino, policy analyst at the Stanford Internet Observatory. And it is produced by Brian Pelletier. Special thanks also to Alyssa Ashdown, Justin Fu, and Ryan Roberts. Here's to 2023.