Moderated Content

MC Weekly Update 12/26: The Show Must Go On

Episode Summary

Alex and Evelyn discuss CSAM in machine learning datasets, big DSA news from Europe, Meta's moderation around Israel & Hamas, Substack's Nazi problem, and another entry in the NetChoice Restatement of the Law.

Episode Notes


 

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Evelyn Douek:

The cover art is an image of the two of us that was taken in a studio and has been run through Stable Diffusion with various prompts for different styles. I get a good chuckle out of it every week, and I hope some listeners do if others are just confused.

Alex Stamos:

I feel like it's run its course. This is something we should ask our listeners. What kind of AI-generated cover art do you expect for 2024? That could be our first roundup. I'd love to do something different than use that photo. Maybe it's a new photo of us. Technology's gotten a lot better since we started this podcast, so we could try, without prompting with a photo, to create images that include the two of us and see if we can train a model to know what we look like.

Evelyn Douek:

Yeah. The best podcast host ever, and it just generates an image of the two of us. Hello and welcome to Moderated Content's stochastically-released, slightly random and not at all comprehensive news update from the world of Trust and Safety with myself, Evelyn Douek, and Alex Stamos. Alex, I love your commitment to the show in doing this episode today. We promised the listeners one more episode before the year went out, and come hell or high water you are delivering, so thank you.

Alex Stamos:

Yeah, well, it might be, it's possible I'm on the tail end of an upper respiratory infection, but yes, nothing will keep me from the mic, including my inability to speak, yes.

Evelyn Douek:

Priorities, we love it. Okay. A nice slate of stories here, I think, that highlight some of the themes of the year and things that we can be looking out for going into the new year as well. Starting with a big report that came out of the Stanford Internet Observatory in the last couple of weeks by your colleague, David Thiel, talking about machine learning data sets. Do you want to give us an overview of what David found and what's happened as a result?

Alex Stamos:

Yeah. David continued his work on the possibility of using AI to do a number of abusive things, but specifically around child safety. He did a report with Thorn earlier this year talking about the communities that are using open source tools and then modifying them to create realistic CSAM, the kind of legal issues that that's going to cause, and the possibility of being flooded with millions and millions of fake images that don't have real victims behind them. This is a follow-on, kind of answering the question of, huh, why does Stable Diffusion 1.5 have the ability to create naked children? There's a number of issues here around AI systems being able to know what nudity is, know what children are, and then figure out the intersection of that, but it also turns out that the most important open source sets of images that are used to train these models have CSAM in them.

He, in particular, looked at LAION-5B, 5B for 5 billion. Now, the dataset itself doesn't actually have the images. What you do is you download, from Hugging Face or one of the places that used to host this repository, metadata that has mathematical representations of all the images. It has descriptions and labels, and it has links to all the images. If you want to build a Stable Diffusion or an equivalent kind of diffusion model or any kind of AI model, you go and you fetch all that stuff and you make a copy. If you grab all the images together, it's in the hundreds of terabytes, so like David says, "These days that fits in a backpack." It is not what you would normally do on your home PC, but certainly, grabbing all of those images and holding onto them is something that a number of people have done, and there's probably hundreds and hundreds of copies of this entire dataset in AWS, for example, and in Azure, for all the people who are using it privately.

Well, so what David did was, instead of trying to fetch every single image and check it for CSAM, which is just not computationally possible, he utilized what are called the embeddings, the mathematical representations that are used for the models themselves, and then did a really cute trick using KNN, k-nearest neighbors, where he was able to look at the nearest neighbors of images that were known CSAM, and by doing that, he was able to find thousands of images that are known child sexual abuse material. How is he sure about this? Well, when he found these, he sent them to C3P, the Canadian center that is the equivalent of NCMEC, and then they had their analysts look and verify.
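
For readers who want to see the shape of that nearest-neighbor trick, here is a minimal sketch in Python. It assumes you already have embedding vectors for a slice of the dataset and for a small reference set of known-bad images; the file names, the neighbor count, and the distance cutoff are illustrative placeholders, not details from David's actual pipeline.

```python
# Minimal sketch of embedding-based nearest-neighbor triage (hypothetical data).
# Idea: only entries whose embeddings sit very close to known-bad reference
# embeddings get escalated for expert review, so billions of images never
# need to be fetched or examined directly.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Placeholder files; in practice these would come from the dataset's published
# metadata and from a vetted reference set held by a clearinghouse.
dataset_embeddings = np.load("dataset_embeddings.npy")      # shape (N, d)
reference_embeddings = np.load("reference_embeddings.npy")  # shape (M, d)

index = NearestNeighbors(n_neighbors=20, metric="cosine")
index.fit(dataset_embeddings)

# For each reference image, pull back its 20 nearest dataset entries.
distances, indices = index.kneighbors(reference_embeddings)

# Keep only very close matches; 0.1 is an arbitrary illustrative cutoff.
candidates = {
    int(i)
    for dist_row, idx_row in zip(distances, indices)
    for d, i in zip(dist_row, idx_row)
    if d < 0.1
}
print(f"{len(candidates)} dataset entries flagged for expert review")
```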

This was a big deal. His report is out, we can link to it. 404 Media, I think, did the most in-depth coverage of his work. One of the outcomes here was that that dataset was removed. In the long run, that's not really his goal. The goal here is that these big data sets shouldn't contain CSAM in the first place, and he comes up with some suggestions for what you should do in the future if you're the kind of person who is building these models. If you're building these data sets for folks to use, what kind of verification should you do so that you're not sneaking that into all these people's models? The downside is that we now have hundreds of models out there that have been trained on this dataset, all of which now have some knowledge of this CSAM in them, and legal and cultural ramifications that we're going to be living with for a while.

Evelyn Douek:

Yeah, so this report caused quite a stir. It made a lot of impact, which is great to see, got a lot of coverage, which is fantastic. You are sounding a little bit like an enemy of progress though, Alex, when I hear you talking about this, because it's hard even to make this argument or ask these questions, but I am going to ask these questions. When we're talking about a dataset of 5 billion images and we're talking about a couple of thousand images of known or suspected CSAM, what's the damage here? How should we think about this in terms of the harm that this corruption of the dataset causes?

Alex Stamos:

It's hard to say what impact these exact images have on the ability for this model to be used abusively. I'm sorry, for models trained on this dataset to be used abusively. I think this is just one of multiple things that David's been working on where you just have to think big picture, which is if you are going to build an open source image generation AI model, you have to really think about the CSAM problem, and part of that should be your training set. Part of it is you should red team your model before you release it, which is what did not happen with these models from Stability, to make sure it can't be used that way. If you find it is useful for creating CSAM, then my suggestion would be that you find a way to reduce that capability, and there's effectively a term some people use, dismissively, to "lobotomize" these AI models.

That is a little dramatic, because if a human's lobotomized, they're messed up in all these ways, but what you can effectively do is try to retrain models to remove the pathways in them that allow them to do these kinds of things, which probably would include some things they're trained on, but also the inferences they're able to make if they understand, like I said, what a child looks like, what a nude person looks like, and can combine those things together. That's one thing David talks about here.

Yes, the actual impact of the images themselves is probably not that great. The impact of Stability's Stable Diffusion 1.5 has been massively negative. I'm just going to say it. Yes, people have made fun stuff, including our cover art, which was built on Stable Diffusion, but the human impact has been really, really bad, not just in this case for children, but for lots of individuals who have been targeted with non-consensual porn that has been generated of them. I think this needs to be a basic moral expectation: if you're going to build something with this much power, you can have it do fun stuff without it having the ability to do things like create completely photorealistic images of children in sexual situations.

Evelyn Douek:

Thank you for mentioning the cover art, actually, because I had a listener write in and ask, "Hey, what's with the cover art?" It's amazing that you are suggesting that this thing that apparently is not self-explanatory to the people that see it doesn't outweigh the harm that's been done by Stable Diffusion. But anyway, yes, the cover art is an image of the two of us that was taken in a studio and has been run through Stable Diffusion with various prompts for different styles. I get a good chuckle out of it every week, and I hope some listeners do if others are just confused.

Alex Stamos:

I feel like it's run its course. This is something we should ask our listeners. What kind of AI-generated cover art do you expect for 2024? That can be our first roundup. I'd love to do something different than use that photo. Maybe it's a new photo of us. Technology's gotten a lot better since we started this podcast, so we could try, without prompting with a photo, to create images that include the two of us and see if we could train a model to know what we look like.

Evelyn Douek:

Yeah, the best podcast host ever, and it just generates an image of the two of us and maybe that's-

Alex Stamos:

Right, let me make that work.

Evelyn Douek:

... that'll work. The link to the Enemies of Progress story, I think, is a good one because when we talked about Enemies of Progress last time, we talked about Civitai a little bit and there's been some developments in that story. Do you want to talk to us a little bit about that as well?

Alex Stamos:

Are you asking for an Enemies of Progress update?

Evelyn Douek:

I am, and your sickness is really helping you nail that voice. It's excellent.

Alex Stamos:

Excellent, oh yes. Okay, yes. Yeah, the advantage, yes. One of the platforms that is used to host a lot of not safe for work AI-generated content is a platform called Civitai, which received a significant funding round, I believe $5 million from Andreessen Horowitz. Just a couple of weeks ago, they were actually disconnected by OctoML, which is a machine learning accelerator platform, because OctoML was unhappy with the amount of non-consensual pornography that was being created using Civitai. There are examples, if you look carefully, I'm not going to give people pointers, there are examples of stuff involving children up there. Now, that is against their policies and they do try to, it seems, enforce those. When we've contacted them, they've been reactive to that. But a much, much larger set is non-consensual porn that includes the faces of, almost always, real women who do not want to have images of themselves involved in pornographic situations.

That seems to be something that really infests Civitai, and it is a core part of that community of people, along with Discord, to be frank, and that is what got them in trouble there. Obviously, OctoML, more enemies of progress in that they don't want their platform to be used to make non-consensual porn, and the founder of Civitai, Justin Maier, said people that are there to make these not safe for work things are creating and pushing for these models in ways that transcend that use case. It's been valuable to have the community even if they're making things I'm not interested in or that I prefer not to have on the site. He kind of punts on any kind of responsibility here.

Yeah, this is going to be our new reality. We've been dealing with non-consensual nudes that have been stolen or forwarded or utilized outside of the situations in which people initially took them. I think that is already being eclipsed by the ability to generate this stuff. People don't have to make that initial mistake of sending some dumb guy their nudes. All you need is one photo on Instagram, or a couple of photos on Instagram, and they're able to create completely realistic porn involving people. It's a real terrible kind of victimization I think we're going to see a ton of.

Evelyn Douek:

Yeah. I think when people talk about the harms of Generative AI, we focus a lot on the deepfakes of Joe Biden or something along those lines swinging an election. Nobody is going to be fooled by a deepfake of Joe Biden where there is a literal second by second trace of where he is and what he says all the time. The real harms of this stuff are to the high school girls who are posting stories on Instagram and then suddenly finding themselves in non-consensual pornography-

Alex Stamos:

Oh, absolutely.

Evelyn Douek:

... on the schoolyard.

Alex Stamos:

Yeah, right. If there's a deepfake of Joe Biden, every single news article, every news outlet is looking into it. Even Fox is going to have to run, "Oh, we're not going to run this because it's fake." But yeah, Taylor from a high school in Milwaukee does not get that kind of support.

Evelyn Douek:

Right, absolutely. Okay. Over to Europe.

Recording:

[foreign language 00:11:51].

Evelyn Douek:

It has been a big couple of weeks for the DSA. Our EU commissioners are delivering us some holiday presents. The first was the designation of three new VLOPs, or very large online platforms, where the adult sites are now joining the club. Pornhub, Stripchat and XVideos have been designated as VLOPs. Now, a little bit of backstory here that I didn't know about but learned when I was reading this: apparently, these sites all self-reported user numbers much lower than the 45 million monthly active users that would've put them over the line to be designated VLOPs. I think Pornhub, for example, said it had 33 million European monthly active users, and it turns out that the commission didn't find that a believable statistic and has now designated them VLOPs, which comes with all of these additional obligations. There's been discussion around whether that's going to require them to implement age verification.

The DSA, the Digital Services Act, doesn't say anything specifically or explicitly about age verification, but there are requirements that these sites do risk assessments to manage risk, especially to minors, and so it's going to be interesting to see how these sites now deal with these additional obligations under the DSA. But it's amazing that we haven't really thought about this or talked about this much before, because of course, porn sites are user-generated content platforms and they have all of these content moderation problems like the ones that we've just been talking about, and so it makes total sense that they should all be designated as these platforms and have to deal with these imposed obligations just as much as X or Instagram or something along those lines. That makes sense. A good step forward; it will be interesting to see how they comply with that.

Alex Stamos:

Gosh! They keep on missing the European companies. It is just amazing. They made that mistake the first time, right? Zalando, they had one European company in their list of VLOPs, who immediately appealed, and the last I checked, from what I can find, that appeal is still happening, with them arguing, "This law shouldn't apply to Europeans, it should only apply to Americans." But just amazing how they keep on expanding this list. Gosh! It's either a testament to how incredibly good they were at drawing their lines that they could never, except in this one accidental case, catch a German company, and it's also a really sad statement about the European internet industry. When you're at the Stripchat level, and there are no European companies above Stripchat on the list of online influence, that's bad. I'm sorry, Europe, but things are not going so well. This regulatory structure that you're taking does not seem to be working out so hot if you're at the Stripchat level and you still have only caught one German retailer as the only company that counts from Europe.

Evelyn Douek:

Yeah, look, I think the fact that the adult sites get lumped in six months after the high-profile social media platforms says something in and of itself about the level of forethought and regulatory design that has gone into this enforcement structure.

Alex Stamos:

The whole shtick is about the dissemination of possibly harmful content. It's like, "Hmm, I wonder if PornHub has any possible challenges in this area." Yeah, famously.

Evelyn Douek:

Yeah. Okay. But continuing with the theme of the commission announcing high profile and sexy, headline-worthy stories, in the past couple of weeks, the commission has also announced a formal investigation into X, the platform formerly known as Twitter, for its compliance with the Digital Services Act. There are a bunch of suspicions that it has alleged in its press release around this, around whether X is complying with its obligations around dissemination of illegal content. There's a reference to the effectiveness or otherwise of Community Notes, a reference to deceptive design in relation to blue checks and what they communicate to users and whether they're being properly assigned and things like that, and other things like failure to give researchers proper data access and to keep ad archives, things like that.

Some of this seems reasonable. It does seem like X is probably not complying in good faith with its obligations under the DSA, but I'm also going to want a lot more information here about why they've singled out this platform, and what it is that they're seeing in particular that has led them to bring this enforcement action or investigation at this point.

Alex Stamos:

You and I predicted this, right? Which is that one of the obvious flaws from the European perspective on GDPR is they did not immediately beat up a big American company in year one. They did not get that big spectacular win in the first year or two of enforcement, and that clearly, they would be looking for that for the DSA and Twitter/X has just made it too easy, right?

Evelyn Douek:

Right.

Alex Stamos:

Whereas, I'm sure they could come up with complaints about TikTok or Meta or YouTube, although perhaps the magical YouTube dust, pixie dust protects them also from Europe.

Evelyn Douek:

Also works in Europe, yeah.

Alex Stamos:

Yeah, who knows? But those companies would at least have arguments in these areas, and they also have the staff to deal with it, and I think that's one of the challenges for Twitter: if you get rid of all of your people who do this work, those are the folks who help your lawyers prepare. Even if you're willing to go pay the humongous outside counsel costs, the lawyers themselves can't do any of the work without talking to lots and lots of people who have the data and who can back it up and such. The fact that they've cut to the bone inside of X, inside the trust and safety teams, makes them really open to this kind of attack.

Yeah, it is unfortunate, because it is pretty clearly politically motivated, and Breton, especially, continues to single-handedly ruin the reputation of the entire DSA regime with the way he talks about this. If you're going to handle this, it should be something that's soberly announced, like we're doing an investigation because it's our job, not because we're excited about it, and that's not his outward persona on this, right?

Evelyn Douek:

Right.

Alex Stamos:

He's the guy on the nuclear bomb riding it down, woo-hoo, riding it down into X headquarters. You don't get the feeling that this is a fairly applied law from the way that all of the discussion has been about it.

Evelyn Douek:

Yeah, right. Absolutely. On the one hand, you do have this major platform basically giving two middle fingers to every regulator and all its content moderation obligations around the world in a sense, and so when you are announcing a flagship regulatory project about bringing platforms to heel around content moderation, it's kind of difficult not to do something when this platform's taking that approach. On the other hand, I think you're absolutely right that there needs to be some standard developed. I hope that there's transparency around what it actually is, what are the obligations that they are failing to comply with and how have they fallen short?

I will say, I saw one paper posted this week that was good and I wanted to draw attention to it. It's called "The DSA Transparency Database: Auditing Self-Reported Moderation Actions by Social Media," because one of my favorite stories this year has been the creation of this DSA transparency database, where the VLOPs have to give all of their statements of reasons for content moderation actions to the transparency database, and apparently that database, in the short time that it's been running, is about to exceed 1 billion entries.

This paper from a group of Italian researchers, which I'll link to in the show notes, did an audit of it and checked how platforms are complying, and it found a few discrepancies and anomalies that suggest that compliance is pretty haphazard. Regarding X specifically, it found that none of the statements of reasons submitted by X specified that they used automated content moderation technology whatsoever, but that all of them said that they had managed to address their content moderation in zero days. X is telling this database that it's not using any automated technology, but also that it's doing it extremely quickly, which seems extremely implausible and not at all correct, and shows that a database is only going to be as useful as the data that gets put into it, and at this stage, that's looking pretty shoddy.
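
To make that kind of audit concrete, here is a rough sketch in Python of the sort of consistency check involved, run over a hypothetical export of statements of reasons. The CSV file and the column names are assumptions made for the illustration, not the transparency database's actual schema.

```python
# Illustrative consistency check over a (hypothetical) statements-of-reasons export.
# Columns assumed for the sketch: platform_name, automated_detection,
# content_date, decision_date.
import pandas as pd

sor = pd.read_csv("statements_of_reasons_sample.csv")
x = sor[sor["platform_name"] == "X"]

no_automation = (x["automated_detection"] == "No").mean()
same_day = (
    pd.to_datetime(x["decision_date"]).dt.date
    == pd.to_datetime(x["content_date"]).dt.date
).mean()

print(f"share reporting no automated detection: {no_automation:.1%}")
print(f"share resolved the same day:            {same_day:.1%}")

# The implausible combination: purely manual moderation, yet everything
# resolved on the day the content appeared.
if no_automation > 0.99 and same_day > 0.99:
    print("flag: manual-only moderation with zero-day turnaround across the board")
```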

Okay. Contrast the DSA approach to content moderation, and the idea that these platforms need to be doing more and taking more content down, with the discourse around Meta in the last few weeks to do with its content moderation in Israel and Gaza. Now, defying our pessimistic predictions: in the last episode we said that we weren't predicting that the Meta oversight board would release its expedited decisions regarding the Israel-Hamas conflict very quickly. We talked about them last week. They announced taking the cases on December 7.

Well, actually, they did announce that they were handing down the decisions on December 19, so about a week ago now, which means one of two things happened. Either, I think, they waited until they were close to being able to release the decisions and then announced that they'd taken the cases, so they could look like they were releasing speedy decisions. Or they can release decisions in 12 days, which is significantly shorter than the usual 90-plus days that it takes them to release a decision. One of the two things is happening, and neither of them is good. I hope it's the latter and they just start releasing decisions more quickly.

Alex Stamos:

I'm sure, yes. This is it, this is the start of, all of a sudden, this being all these people do. The fact that these people all have really big jobs is why they're really big names, but unlike, say, the Supreme Court, which still takes forever to do anything, they're law professors and they're advocates and they're working attorneys and all that stuff.

Evelyn Douek:

Right, yeah. Before we give them any prizes for speed, let's remember these cases were about two pieces of content that were posted basically directly in the aftermath of October 7. They were releasing decisions on December 19 about content that had been posted some two months earlier, so still not the most expeditious of reasoning. Reading them, the decisions themselves, they're pretty disappointing, I have to say. Both decisions agree that Meta erred in its original decision to take down the content and agree that the content should be put back up with a warning screen. We talked about this last time. One of them is an image of an Israeli hostage being taken away, and one of them is graphic imagery of the aftermath of a bombing of Al-Shifa hospital. In both cases, the board said that they erred on the side of this is important content, it's adding to people's understanding of the conflict.

They erred on the side of more speech, which again, let's contrast that with the approach that the European Union has been taking in writing these letters to these platforms and saying, "You need to be taking more stuff down. We are concerned about the level of graphic or illegal content on your platforms." That's clearly the underlying theme of these letters that these platforms are getting. On the other hand, the oversight board is saying, "No, no, you need to respect more speech and leave this content up."

But the board didn't engage in any of this kind of analysis. It didn't engage in any of the kinds of trade-offs around the costs of having this content on its site, or the error rates, or the reasons why these mistakes might've been made. I am left reading these two expedited decisions and feeling like, in this incredibly high stakes moment where Meta is mediating our understanding of one of the most politicized conflicts that is playing out on social media, the board added absolutely nothing to my understanding of what's going on, or of the challenges the platforms are facing, or of the ways in which they should approach them, and I think that's pretty disappointing given everything that brought us to this point.

Alex Stamos:

Yeah. We keep on saying this, but this is what the board is for. It's exactly this kind of situation where you would hope that they would be at their best. They've had these stupid ticky-tack cases of people being taken down and soccer stars and crap like that, and it's like, oh, a conflict where people are really dying and where the propaganda war is a huge part of the conflict, that's the place where you would hope they would have a real good way of shining a moral light on how we should handle this incredibly difficult situation, because there's no right answer, as I'm sure we're about to talk about. There's no right answer here for how you adjudicate speech in this kind of conflict. For them to say, "Well, here's at least a model that we think is internally consistent," would be nice here.

Evelyn Douek:

Yeah. Again, this idea of just dealing with two different pieces of content and not really thinking about the systemic issues was, I think, problematic. Contrast that with another report that came out in the last couple of weeks that was trying to take a much more systemic approach. This is a Human Rights Watch report that has come out about the suppression of Palestinian content. It's got some headlines around its argument that there has been systemic online censorship of Palestinian voices, and this is another example of where the methodology of outside research groups when they're doing this kind of work is necessarily extremely limited. Do you want to talk a little bit about your qualms with this report, Alex?

Alex Stamos:

Yeah. Just like we've seen from the Center for Countering Digital Hate, like we've seen from the ADL, this is an advocacy group that is doing what they claim is independent research in support of their political position. Human Rights Watch is a great org in lots of ways. I know a number of people there, I think they're great. I disagree with them on some of the specifics here. I really disagree with the idea that this is some kind of real research. This is anecdata. They put out a call of, "If anybody has examples of censorship that supports our priors on this topic, please send it to us." Shockingly, they were able to get data that supports their priors. The mechanism of gathering up from people who are your Twitter followers, your Instagram followers, who already are people who follow you, and then specifically asking for, "Please give us examples of pro-Palestinian speech being censored," means that's what you're going to get.

It doesn't mean what they're doing is wrong, but it should not be, "This is a systematic study of this issue." This should be, "Here are examples that we don't like," and I think this would be much stronger if they went into some of these examples and said, "This is why we think this wasn't appropriate," and didn't try to make these big sweeping claims that are not supported by their methodology. Again, I think advocacy groups should advocate, advocacy groups should use data to advocate, but they should not overstate what that data says, and they should not overstate their methodology, because that ends up hurting the real research in this area that uses scientific sampling, that uses ways to get rid of bias, that uses blinding with reviewers and coders and such, and none of that happened here. Whatever you think about what they said, and we can talk about the actual substance, from a methodological perspective, this thing's a disaster.

Evelyn Douek:

Yeah, and look, I think I have to agree with that. The idea that you put a post out and say, "Give us examples of this," and then you get examples of that, that's not a great methodology. It happens to be the methodology that's used for a lot of different issues in society, but it doesn't hold up. Hilariously, and unfortunately for Meta, one of my favorite things from this report is that the post that Human Rights Watch itself put up calling for examples was actually flagged by the platform as spam and its visibility was reduced for a while. They talk about 1,050 mistakes that they catalog in this report, and in the context of this conflict, 1,050 is a rounding error, if that, it's not-

Alex Stamos:

This is the number one interesting topic on all of social media right now, globally. Obviously, other than any given soccer match or something, but of things that are actually relevant from a content moderation perspective, this is the number one thing, so a thousand examples, like you said, are nothing.

Evelyn Douek:

Right. On the other hand, this is what we're reduced to in a world where we don't have good transparency from platforms about what's going on. The point that Human Rights Watch is making in this report that is true is that this builds on years and years and years of such suspicions and allegations from civil society organizations, where pro-Palestinian voices have been repressed on social media. Meta itself commissioned a human rights impact assessment of its practices in the region, and that report did find that, not through intention but through unintentional bias in the way that Meta's systems were set up and in the way it allocated resources, there was bias against Arabic-language content in the region, and things like that.

It's not coming out of the blue, it's not only these 1,050 examples, and there aren't a lot of other ways in which advocates can make these points. But I think you're totally right that to present it, and I don't know how much they did, to suggest that this is evidence of some systemic problem is going too far. I do think it makes a prima facie case that Meta needs to respond to, or look into why this is occurring in the region, or give us some transparency. Like you said, it's such a high stakes issue and it's the number one issue on social media at the moment.

Alex Stamos:

Well, okay, but you and I have talked about this, which is that Facebook has effectively picked a side in this conflict. That is what Human Rights Watch is really angry about, but they can't say it, because they can't politically bring themselves to say what they actually believe, which is that Facebook has decided, because this is a conflict between two governments. Now, there's lots of argument over how legitimate Hamas is as the ruler of Gaza and how long it's been since an election and such, and I've heard some interesting ... there's an interesting podcast, Ezra Klein had some interesting data from pollsters who are still trying to poll in the middle of a conflict, which is obviously not great, on how much Hamas actually represents the people of Gaza. But no matter what, they are the ruling party, just like in any authoritarian state, they're the ruling party, and you have a conflict between them and the state of Israel.

But Facebook does not treat those two states equally, as they might, at least prima facie, with Ukraine and Russia, in that Hamas is a designated terrorist organization in the United States, and it is also on Facebook's DOI list, their Dangerous Organizations and Individuals list, and so Facebook says explicitly, "We do not allow you to celebrate these people, to promote them." They do not say that about the state of Israel or the Israeli Defense Forces. There is enforcement against pro-Israeli speech, but it has to be very explicit, whereas pro-Palestinian speech, if in any way it is supporting Hamas' attacks or it is hoping that Hamas wins, then that is going to be seen as support. I think that is the fundamental decision here: Facebook decided Hamas is on the DOI list, and we're going to continue with that, we're not going to make some kind of exception for this conflict because of pushback and all that stuff.

I think that was the right decision, but people can disagree. Facebook has the First Amendment right to go and make that decision and decide that's it, in an explicit editorial decision that they have made to not treat the two combatants here the same. Human Rights Watch, if they want to make the argument, they can go argue and say, "Hamas is just as legitimate as the state of Israel." Now, I don't think they want to do that, but that's fundamentally, that's their fundamental thrust here. I think they should make that argument if they want, but to try to get at it sideways, with we are able to get a thousand pieces of anecdata by asking people specifically for data that supports our conclusions, is a little bit of a dishonest way of doing that.

Evelyn Douek:

Yeah. I don't know that they need to go so far as to say Hamas is as legitimate as a state. I think that's intentionally provocative, to say that that's the argument that they have to make. I think that they can make the argument that because Meta is being risk averse, including in the light of years and years and years of government pressure to crack down on terrorist content, they are not only intentionally repressing and censoring Hamas-related content, but they are also not being careful enough about extremely legitimate, extremely important, extremely well-protected political speech advancing pro-Palestinian voices, which is what a lot of this report is about as well. This is the collateral damage of Meta's side-taking: content that is not Hamas speech, but gets caught because the classifiers aren't as good on Arabic language content or enforcement is being overbroad.

There have been examples over the course of this conflict where Palestinian flags and things like that were setting off classifiers just because of hypersensitivity in the context, which again might be an understandable decision, because there's so much going on on these platforms that they don't have time to do case-by-case analysis of every single post or whatever it is. I don't know that it's all just about the pro-Hamas content necessarily; I think that's the point that these advocates are making.

Alex Stamos:

But that is the fundamental thing. This is not new. A lot of Arabic language stuff was overcensored during the ISIS era, but the difference was that support for ISIS was not popular on college campuses. That is one of the things that's changed here: there's more symmetry in the support. It's not completely symmetric, but there's more symmetry in the Western countries that are very influential on Facebook's policies and within places like Human Rights Watch. There's a lot less sympathy in Human Rights Watch for ISIS than there is for Hamas, clearly.

In this situation, Facebook is, at least from a policy perspective, not treating Hamas any differently than ISIS, and I think the fundamental thing that people need to talk about here is, should there be a DOI exception of, "Oh, well, we think you're kind of a freedom fighter"? I don't think so, but that would be a more honest argument, because that is the fundamental issue here: if you treat one of these sides as terrorists and the other side as a legitimate state fighting to protect its people, then you're going to get a significant asymmetry in the enforcement of the rules.

Evelyn Douek:

Yeah, or alternatively, Facebook, and Meta, has long had a bias against, or has been less accurate and less rights-respecting with, Arabic language content on its platforms since the beginning, which is probably true. It is now, in this moment of global awareness and scrutiny of how it's moderating this content, that we are seeing these issues, again, being thrust to the forefront, where people are concerned about, again, not just pro-Hamas content, but about pro-Palestinian voices on these platforms.

The other really important part of the report that I think gets to this asymmetry we've been talking about is where the Human Rights Watch report mentions government influence on Meta, meaning Israeli government influence on Meta and the way that that is asymmetric. This was something that I think the oversight board was completely blind to in its decisions, but it is a really important issue and something that people have talked about as well, where Israel has this cyber unit that flags content for Meta and sends it content takedown requests, and in the Human Rights Watch report, they note that there's been reporting around these takedown requests since October 7, and there's been a compliance rate of 94%, which means either that Israeli officials are extremely accurate in what they're suggesting breaches Facebook's or Meta's rules, or that there's a deference there to Israeli flagging.

This is something we've talked about at length: jawboning concerns and government influence over content moderation concerns in the US context and in the EU context, and I think it's something that we need to be alive to in the Israeli context as well. There should be much more transparency around that, and we should be making sure that there's not undue deference to governmental flagging of content.

Alex Stamos:

Well, and I think this is right back to the proposal you and I made a long time ago when Musk took over Twitter, which is that if you really care about government overreach here, each of these companies should have as close to a real-time database as possible of government requests for takedowns. There's no real downside, and you have exceptions for CSAM and some of the stuff that's clearly just illegal. But for this, if you could see a stream, not just a percentage, of this is what Israel asked to be taken down and this is what was taken down, and you could compare that to Saudi Arabia and Egypt and Turkey, all of whom have numbers in that report as well, then you could see whether there's asymmetry or not.
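
To give that proposal a concrete shape, here is a small Python sketch of what one record in such a public takedown-request log might look like, along with the kind of per-country comparison Alex describes. Every field name and category here is an assumption made for the illustration; no platform publishes data in this form today.

```python
# Hypothetical schema for a public log of government takedown requests.
# All field names and values are illustrative assumptions, not any
# platform's real data model.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class GovernmentTakedownRequest:
    request_id: str
    requesting_country: str                # e.g. "IL", "SA", "EG", "TR"
    requesting_agency: str                 # the specific unit making the request
    received_at: datetime
    legal_basis: str                       # statute or policy the government cites
    content_reference: str                 # URL or internal content identifier
    alleged_violation: str                 # what the government says is wrong
    action_taken: str                      # "removed", "geo-blocked", "no action", ...
    decided_at: Optional[datetime] = None
    withheld_reason: Optional[str] = None  # e.g. CSAM or other legally restricted material

def compliance_rate(requests: List[GovernmentTakedownRequest], country: str) -> float:
    """Share of a country's requests that led to removal or geo-blocking."""
    from_country = [r for r in requests if r.requesting_country == country]
    if not from_country:
        return 0.0
    acted_on = [r for r in from_country if r.action_taken in ("removed", "geo-blocked")]
    return len(acted_on) / len(from_country)
```

Publishing a stream like this, with the same per-country rate computed for every government, is what would let outside observers test the asymmetry claims directly.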

What you'd probably find is that these companies are way too obsequious to state power no matter what. One of the reasons they'll never do this without some kind of legal requirement is that a core part of their government affairs shops' job is keeping countries happy, and that constant tension between the government affairs people and the product policy people is something that I saw a lot of, and it does seem that since I've left Facebook, the government affairs people have been on the rise in their ability to get Facebook to take down stuff on government requests, and I'm sure Israel is no exception.

Evelyn Douek:

Right, yeah. That's a number one issue for transparency and getting to the core of these issues, and Meta has actually voluntarily committed to bringing transparency to this. But it was two years ago that it said it would create a database of these requests, and there hasn't been any progress as yet, so I guess we'll just keep waiting. Good thing nothing important has been happening in the past two years around government pressure on platforms. All right. Moving to the "every platform has content moderation problems" corner: this week, it's Substack, which we've talked about before. This one has been an ongoing controversy for the last couple of weeks. Alex, do you want to give us the lowdown on this one?

Alex Stamos:

There's been a brewing controversy since some of the executives of Substack, on their own podcast, invited a person with what one might euphemistically call problematic white supremacist views, I think, very explicit ones. This is a guy, I'm not going to give him too much airtime, who effectively has two personas. He has what used to be a somewhat secret persona that was very explicitly, pretty much effectively pro-Nazi, not just your standard racist, but really, really out there, which was different than his public persona, and so he was featured on their podcast, and that triggered another round of discussion around Substack's policies, which is that Substack has some substacks that are extremely racist, that are really nasty. They have substacks that are full of total crazy conspiratorial stuff. They have substacks that I've seen that like to slander people continuously and say lots of untrue things about people and to drive harassment their way.

The difference between Substack and other platforms is that they are revenue-sharing with these people. The other thing that this has opened up is that the Substack founders had this whole grand theory of the universe that everything bad that's ever happened online is because of online ads, so if they didn't have online ads, there'd be no bad content. Well, it turns out, shockingly, that's not true, that people have had problematic speech and risky types of speech for a long time, and that one of the ways they monetize that is that you can directly take payments from your fellow travelers. In this country, we have a First Amendment right to do that, but companies don't have an obligation to then actually take the money, take a percentage of it, and keep it as their own profit, which is what's happening here.

There has been kind of another blowup, and as a result, the Substack folks wrote another blog post basically saying, we're not going to change our policies, we need diversity of thought, yada, yada. The best takedown of this was by Popehat, Ken White, who's excellent, a real libertarian, but also has a real good nose for this BS, where you effectively have people like this draping themselves in the First Amendment when, one, they are profiting from the speech, so this is not about government action, this is about whether they should make money from taking 20% off the top of Nazi subscriptions, right?

It's like the John Birch Society: they're allowed to use USPS, but the postal annex didn't have to have their employees take extra money to go help them package up all their crazy newsletters. That would be a private decision, and they're mixing those things up here. I think the Popehat article is the best because what it also points out is that these guys have policies that go well above and beyond what the First Amendment would allow a government to do, so they are already violating whatever their claims are; they've just decided that the super racist speech is worthwhile for them.

Evelyn Douek:

Yeah. There's nothing more frustrating than, exactly as you say, people who try to clothe themselves in the glow that the First Amendment has culturally and wrap themselves in that language when it's just completely facetious. The blog post says, we believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power, really trying to invoke this rhetoric from First Amendment law and lore, in both senses, of the marketplace of ideas and of the idea that the way you defeat bad speech is with more speech, with good speech.

But exactly as you're saying, the founding fathers weren't concerned with the right to keep a cut of newsletter subscriptions, they were concerned with state power and government censorship of people's speech. That's your first issue: this is a business, and nowhere in the blog post do the Substack guys mention the money and the business interest here. Exactly as Popehat points out, Substack has content moderation guidelines. It has rules against hate speech, it has rules against doxing, it has rules against sexually explicit content. All of these things-

Alex Stamos:

Right, which is the big one, because they wipe out all porn. Basically, all nudity, not just illegal content, is not allowed on Substack. For some of these things, you could argue, "Oh, well, there are First Amendment exceptions for hate speech, and we're going a little bit beyond," but that's one where they've just decided we're not going to have porn. That is their right. But to argue that they're still champions for free expression when they're wiping out all nudity is just ridiculous.

Evelyn Douek:

Yeah, right. There is no First Amendment exception for hate speech. That's stuff that's also protected, and porn, like all of this stuff. If you're really talking about a First Amendment standard, we are so well beyond that. But I guess the rhetoric is an attempt to cloud the issue when really the thing is, this is a business proposition and this is a brand.

Alex Stamos:

In the end, the expectation is that Substack is doing very poorly financially. They've had down rounds, they haven't been able to fundraise, they're losing a lot of money. They had to do a very embarrassing fundraising drive where they sold stock in basically a GoFundMe for themselves, and they raised a tiny fraction of what they wanted to raise.

A number of people have pointed out that this is also just an indication of how bad things are there, that they can't give up the Nazi revenue. Ken White, in the Popehat piece that is itself on Substack, makes a good point, which is that the only way Substack is really going to change their mind here is if they face an exodus from the people who don't want to ply their wares inside of a Nazi bar. We've seen this spiral before: when a platform stands up and says, "Oh yes, we're defenders of speech, but specifically defenders of this kind of very edgy alt-right content," it gets flooded with that kind of content and it becomes a feedback cycle. They don't get a general distribution of people; they specifically attract folks who are then looking to monetize on the platform.

This also should be the nail in the coffin of the advertising-makes-everything-bad argument that one of your former colleagues at Harvard has made her entire career on. I shall not mention her name because I've got enough problems as it is. But there are people, especially from the East Coast, who have made these arguments, which were never factually correct and now just look really silly, because it turns out another way you can have bad content is if you take a credit card, take some of the money off the top, and then give the rest of the money to the person for their speech.

Then, that creates a huge amount of audience capture in a way that advertising does not, if you have 5,000 totally rabid crazy people. This is what we see with Alex Jones. When Jones was kicked off the big platforms, he became crazier and crazier because he became addicted to the money from the nuttiest people who were willing to pay him in the first place. I think Substack is a fantastic demonstration of the audience capture phenomenon: because you don't have the broad set of advertisers who are supporting you, you go deeper and deeper down a rabbit hole for a smaller number of people who are willing to directly pay you.

Evelyn Douek:

Yeah. It'll be interesting to see what creators do. Hundreds of them wrote an open letter to the Substack founders and leadership asking why they were adopting this approach. I've seen plenty of them talking, including Ken White in his Popehat piece, about what they might do next and whether they're looking for alternate homes. It'd be great to see them voting with their feet, so to speak, and putting real pressure on the platform here. One of the stories this year, we spent a lot of time talking about the Reddit revolt and the way in which the creators on Reddit tried to exert influence on the platform leadership, and in the end, that petered out, so it'll be interesting to see whether creators do have leverage here or not. I suspect that there are lower barriers to exit in this particular case?

Alex Stamos:

Right, and they're right. Substack has created a model here, and there are a number of platforms where you can do effectively the same thing. The people who are smart have been doing this under their own domain, and Substack, to their credit, has always said, "You own your mailing list, you own the customer relationship," and so moving off the platform is relatively easy. Yes, we'll see whether people move off, because there aren't a lot of economies of scale on Substack. They do have a recommendation algorithm and all that kind of stuff, but it's not how most people find the content. You find it from elsewhere. You find it from Twitter or X or Threads or YouTube or something, and then you go, and Substack is really there for the money and there for the hosting.

Evelyn Douek:

Right. Okay. To round out, we should talk about looking ahead for the next year, and my looking ahead for the next year coincides with the legal corner for the week. Thank you. In the past couple of weeks, we have another entry for the NetChoice restatement of the law that is developing, which is that NetChoice-

Alex Stamos:

Another entry for your future class, it's like law 274 NetChoice. That's it.

Evelyn Douek:

Exactly. The topic is the NetChoice cases. NetChoice has brought a challenge to the Utah social media law that we've talked about on this podcast over the course of the year, that requires parental consent for the use of a social media account for minors, where minors are defined as people under 18, and I think, as I mentioned at the time, you can drive a car and get a driver's license at 16 in Utah, so that's an interesting value judgment there.

Alex Stamos:

I'm a little worried to look up the age of a number of things in Utah. Actually, I think that could get us in real trouble if I started just, yeah.

Evelyn Douek:

Okay. Moving right along. The law also imposes a curfew for these people between 10:30 p.m. and 6:30 a.m., so I think NetChoice has an extremely strong case here, but it's one of these many cases that is now bubbling up in the courts around the country. We've got Arkansas, Utah, California. We've got the TikTok ban in Montana, and Texas. My prediction for the next year is, we have these Supreme Court cases that we're going to be talking about at length, I'm sure, with the jawboning and the Texas and Florida social media laws. But it's also all these other laws around the country that are slowly bubbling up through the district courts and courts of appeals that are really going to be at the forefront of First Amendment issues, and it's going to be keeping us all extremely busy, I'm sure, in the next 12 months.

Alex Stamos:

It turns out, you can get married in Utah at 16 or 17, and in fact, you can get married at 15 if a judge approves it, which has happened a couple of times.

Evelyn Douek:

Okay. I don't think a judge can-

Alex Stamos:

But you can use the internet.

Evelyn Douek:

Yeah. I don't think a judge can approve you to use a social media account. That one you need your parents. No judicial override.

Alex Stamos:

Can my husband and I have an Instagram account? Nope. No, but enjoy making a life together.

Evelyn Douek:

Right, exactly. What are you watching for the year ahead, Alex? What's your big prediction? What's the big story? What's on your mind?

Alex Stamos:

This is pretty generic, but 2024 is one of the craziest years ever for elections globally. We obviously have a huge one in the United States. You've got India, you've got the European Parliament, a bunch of European countries, a bunch of countries in Asia that are interesting. I think we have Taiwan happening really soon. All kinds of really important elections happening, and that's against the backdrop of an overall retreat by companies around election integrity.

The political pushback has been almost uniquely contained to the United States, but again, the US is incredibly influential on these companies and they're mostly American executives. Trying to explicitly keep elections from being torn down online or influenced online by means that violate policy, by things like coordinated inauthentic behavior, it does seem that lots of platforms have backed off on that, X being the most obvious example. But they have also, as we've talked about, set the bar so incredibly low that you can back off by 20% or 30% if you're Meta or Google and nobody notices, because you're still doing stratospherically better than X, which is effectively just straight up allowing Russian troll farms now.

It's going to be much worse than 2016, because when all this stuff was going on in 2016, there weren't teams looking for it, there weren't governments looking for it, but the number of actors was much smaller. What's happened since 2016 is that the Russian campaign in 2016 has been dissected and discussed over and over again by lots of people, including myself, and lots of countries haven't been angry; they've looked at that like, "Oh, I want that capability." Every major country, including the United States, now has propaganda capabilities online and has invested heavily, and lots and lots of countries are going to be involved in the US election and the European elections. This is especially against the backdrop of Ukraine, where for Russia right now, just a tie counts as winning.

The only way that they can have any victory beyond just holding Crimea and a little bit of the DNR and LNR is for Europe and the United States to drop support for Ukraine. These elections are not just about domestic politics, they're about the bloodiest war that Russia has been involved in since World War II, and so, for them, undermining that defense support is a straight-up existential project. Yes, I'm terrified of what's going to happen this year, especially around the US election, and not just at the presidential level. As we've seen here, individual members of Congress have a lot of influence, and individual senators have the ability to do things like keep anybody in the military from being promoted for months. I think we'll see a lot more activity at lower levels, because the presidential race, that's going to be based upon inflation and crazy, humongous stuff happening.

Does Joe Biden have a stroke? Does Donald Trump say something crazy? Those are the things that are going to influence the presidential race, and everybody's going to be talking about it, and so while we're all paying attention to that, I'm really worried about what happens in specific areas in which there is a candidate who is perhaps less excited to defend Ukraine, and therefore they get a huge amount of local support that looks like it's legitimate. You could see that over and over again across the world, and the fact that the companies are backing off at the same time that they really should be staffing up to be able to do these things in lots of languages, I think, is going to be really problematic.

Evelyn Douek:

Well, what a cheery note to end the year on. Grim. It does make a good case for listening to this podcast next year, obviously, because these issues are going to be at the forefront in the coming year.

Alex Stamos:

How about your predictions? What are you going to see?

Evelyn Douek:

I'm watching the legal stuff. What a year the past year has been in terms of legislation being passed. We've seen the UK pass its Online Safety bill. We've seen the DSA come into force and the enforcement actions and things are going to be rolling out in the next year, watching this X investigation, but also, all of the other platforms. Then, all of these state cases in the United States as well as these Supreme Court cases in this term, the TikTok bans. There is so much going on, and so those are the things that I'm going to be watching.

I don't know how they're going to pan out, honestly. I don't see nice, neat resolutions to any of this. The answer is not going to be one nice, harmonious legal regime that is easy to comply with in every jurisdiction and makes a lot of sense. It's going to be, I think, a complete mess, and it's going to take a while to work all of this through. That's going to be something that we're puzzling through in the next year. But I'm looking forward to doing it on this podcast every week or so at random intervals with you, Alex. It's always a pleasure and it's a real highlight of the week every week, so thank you very much for doing it with me.

Alex Stamos:

I'm going to make one positive prediction-

Evelyn Douek:

Ooh, excellent.

Alex Stamos:

... in our sports corner. The Sacramento Kings will make the NBA playoffs. I disagree with the Vegas line makers right now. Kings are in it. I don't think they're going to win the playoffs, but I think they're going to make the playoffs for a second year in a row. It's a great time to be a Kings fan. Go Kings, go Sacramento. Sacra Tomatoes, the real ... and also, I think Sacramento will be, for a little while now, the best basketball team in Northern California, which is kind of a crazy thing because the Warriors are completely melting down and falling apart.

Evelyn Douek:

See, I don't even know enough to know how outrageous or outlandish those predictions are, how high risk they are, but I guess we will see over the next year. Okay, and with that, this has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode and all of our work this year wouldn't be possible without the research and editorial assistance of John Perrino, policy analyst extraordinaire at the Stanford Internet Observatory. Thank you, John. The show is produced by the always wonderful, flexible and helpful Brian Pelletier. Thank you, Brian, for everything this year, and special thanks also to Justin Fu and Ryan Roberts and Rob Huffman over the year. Happy New Year everyone, and see you in 2024.

Alex Stamos:

Happy New Year.