Moderated Content

MC Weekly Update 12/5: THE MODERATED CONTENT FILES

Episode Summary

In this explosive, never-before-heard episode, Evelyn and Alex discuss the trust and safety implications of the AI text-generator ChatGPT, the expansion of a cross-industry database for removing non-consensual intimate images from platforms, a legal challenge to a new online hate speech law in New York, and….. The Twitter Files, of course.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Evelyn Douek:

Okay. I mean I guess we have to start with the Twitter files.

Alex Stamos:

The Twitter files.

Evelyn Douek:

That's right.

Alex Stamos:

In a world...

Evelyn Douek:

That's right, where one company controls access to the public sphere.

Alex Stamos:

Where entire political elections are about a single New York Post story.

Evelyn Douek:

Exactly. Welcome to Moderated Content Weekly News Update with myself Evelyn Douek and Alex Stamos. Moderated Content is audio content that is hosted and organized by a moderator or host. This person guides the conversation and topic of the podcast and may invite guests or experts to join the discussion. The content typically revolves around a specific topic, or theme, and the moderator ensures that the conversation flows smoothly and stays on track. That catchy description of our very exciting podcast brought to you by ChatGPT. Seems relatively accurate. So Alex, what do you think?

Alex Stamos:

Yeah, so one of the big things people have been talking about this week is this release of ChatGPT. It's the next version of a large language model by OpenAI, OpenAI being a company based here in the Bay Area, focusing on AI. They famously released GPT-3, which was kind of a pain to interact with. You could make GPT-3 do some really cool stuff, but it sometimes required a significant amount of tuning. This next version is incredible. People are asking it to write poems about random topics, and it sounds like you can make it effectively write Onion articles. As a number of people have pointed out, it's often wrong about things, but it sounds very confident. An amazing BSer is effectively what they've built here.

Evelyn Douek:

Yeah, no, it's incredible. I've seen people designing exercise regimes and people putting in ingredients for recipes and getting curry recipes back. So I just honestly can't believe it. Sometimes I think they're mock-ups or fake given how good they are. But obviously technology for good, technology for evil, this is just a tool, blah, blah, blah. How concerned should we be about malicious uses of this? Obviously, as with everything, it is always a content moderation problem. So yeah, what do we see?

Alex Stamos:

So first you have to give OpenAI credit, in that OpenAI is, I think, the most aggressive of these AI companies in thinking about the uses of their tools, having an ethics team, having a bias team, and trying to red team these things out. And so a lot of the basic things you would expect, the harmful things people might try to do, it does detect and stop. That being said, these large language models are incredibly complicated, and then they released it out to the public. So this, unlike GPT-3, they did not gate to specific researchers.

We had to apply for GPT-3 access. It was a big deal. This one, they just kind of blew open to anybody who created a login for free. And what you saw was people pretty quickly figured out ways to bypass the controls and to make the AI say racist things, say kind of harmful things, talk about suicide and such. And so there's definitely a possibility of pushing this model to reflect the negative portions of the society it's trained on; it is effectively trained on the corpus of all of the public output of human beings. One of the more interesting ones I saw: GPT-3 has the ability to write often-correct code, especially Python, of which there's lots of Python training data out there and such. And so people found you can't ask it, "What are the smartest races?" You can't ask it a straight-up racist question.

Evelyn Douek:

That seems good, yeah.

Alex Stamos:

Which is good. But what you can do is ask it, "Please write me a Python app that takes the gender and race of a person and decides how intelligent they are." And it will give you a Python app that is as racist as you'd expect.

Evelyn Douek:

Yeah, I was going to say, I'm sure the answers to that are excellent and very politically correct.

Alex Stamos:

Right? Yes, yes, yes. The app definitely does not output a soliloquy on IQ being a shallow measurement and such. No, it is as racist as you expect, which is fascinating, because that is something where I guarantee there's not training data out there, there's not a Python app out there, that gives that output. So it was both able to synthesize its knowledge of Python as well as the racism and misogyny it picks up from this huge training set and combine them together. And so we've seen examples after examples of "pretend you're doing this" prompts and stuff like that getting by the filters, which I think is going to be inevitable as these things get more and more complex. We talk about Turing completeness of programs being that you can't predict when they end. We are entering a world of AI being so incredibly complicated that the human creators of it predicting what actions it will take is well behind us. And so you can do some red teaming and such to try to find the obvious issues, but in the end, if these things are going to exist, you are going to be able to push them into corners that the designers don't want them to be in.

Evelyn Douek:

Right? Yeah. Classic content moderation issue in a new form that no matter what you think of the, it's a cat and mouse game and the bad actors will be one step ahead. Obviously one of the other concerns that we've seen people talking about is threats to journalism. You can put in pretty effective prompts and get whole articles about certain things. And then the flip side of that, and this might be ready for primetime already, is things like information operations and obviously that's something you're an expert in. So I'd love to hear how worried are you about this as a tool for that?

Alex Stamos:

Yeah, so our team at SIO, along with some folks here in the political science department, we actually have a paper in review right now in which we have tested GPT-generated disinformation against what we knew to be real human influence operations, and the result is not great. And that is on the old model; academic publishing takes forever. We've been working on this for a year or so. And so I am sure this current [inaudible 00:05:39].

Evelyn Douek:

ChatGPT wouldn't take that long to write the article Alex.

Alex Stamos:

That's true. Maybe I can get ChatGPT to be reviewer two for us.

Evelyn Douek:

Right. Exactly.

Alex Stamos:

Yeah. So yes, there is good empirical evidence that these things will be very useful, and I think the usefulness for that will be for people who are trying to do influence operations not in their native tongue. So if you are sitting in China and you want to have influence in Germany, instead of finding dozens and dozens of German speakers to do your content all day, you can have one person who understands German society, understands German enough to read it, and then you can prompt the large language model to generate German propaganda that blames COVID on the US instead of China, for example. And I think that will become a huge force multiplier for smaller influence operation shops, especially because with GPT-3, the quality would drop off pretty precipitously after the first couple of paragraphs. The way these models work is they statistically think about what is the next most likely word based upon the constraints that have been placed on the model. And so as you get further and further from the beginning, they sometimes start to sound crazier and crazier. ChatGPT, from my experience so far, has had much less of a drop-off in quality, although they cap the output. And so if you're looking to write Twitter-length or short pieces, it would probably be a very effective way to do so for an influence operation.
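The "next most likely word" loop Alex describes is autoregressive sampling. Here is a minimal sketch of that loop in Python, using the small open GPT-2 model from the Hugging Face transformers library as a stand-in; the model, prompt, and sampling temperature are illustrative assumptions, not details of how ChatGPT itself is served.

    # Sketch of autoregressive generation: at each step the model scores every
    # token in its vocabulary and we sample one of the more likely continuations.
    # GPT-2 is a small open stand-in here, not the model behind ChatGPT.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "The history of the printing press is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    for _ in range(40):                                     # generate 40 tokens, one at a time
        logits = model(input_ids).logits[:, -1, :]          # scores for the next token only
        probs = torch.softmax(logits / 0.8, dim=-1)         # temperature < 1 sharpens the distribution
        next_id = torch.multinomial(probs, num_samples=1)   # sample rather than always taking the top token
        input_ids = torch.cat([input_ids, next_id], dim=-1)

    print(tokenizer.decode(input_ids[0]))

Each step conditions only on the tokens generated so far, which is why, as Alex notes, quality can drift as the output gets longer.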

Evelyn Douek:

Provided you can find that one person that understands German society, they must be out there somewhere.

Alex Stamos:

I think the other thing that you and I have to think about, less me because I'm moving towards completely objective grading, is how many law students are now going to have large language models generate their answers to questions, for example?

Evelyn Douek:

Well, yes, that would be a problem, but on the other hand how many law professors can use it to generate exam questions?

Alex Stamos:

Yes. Right.

Evelyn Douek:

So there are upsides and downsides to everything.

Alex Stamos:

Or law articles where you have to write 80 pages or something.

Evelyn Douek:

Exactly.

Alex Stamos:

You start with three and then have a large language model turn it into 82.

Evelyn Douek:

Just write a paragraph here and there and then suddenly incisive analysis, just spews forth.

Alex Stamos:

We're joking, but the future of intellectual pursuits where you pay human beings to do things that are just them thinking and writing, really this is going to be fascinating in the next 50 years. And I don't mean fascinating necessarily in a good way.

Evelyn Douek:

Completely. And of course, what are the First Amendment issues that this is going to raise as well? If code is speech, is the output of something like this First Amendment protected? I'm sure that we are going to see those cases.

Alex Stamos:

And all the copyright stuff like GitHub is already being sued because their large language model trained on code uses other people's code that they did not license to GitHub for that purpose.

Evelyn Douek:

So in one sense, I feel secure in my research agenda because all of these legal issues are going to come up, but maybe ChatGPT is going to answer those legal issues for itself. Just argue it before the court and Justice Alito will decide the case. I'm sure with great technical acumen.

Now for a thoughtful example of content moderation this week. So in 2017, Facebook asked Australian users to send them nude photos of themselves, and this was around the time of the Cambridge Analytica scandal, and so people did a double take. But fast forward to this month, and Facebook and Instagram have been running a refined version of this pilot globally for a year now. What it does is, if you send in photos that you are concerned about being spread on social media, they have created a database of these images that works similarly to how the hash database for terrorist content works: they give it a digital fingerprint and then check all uploads to the platform against that database, and basically ensure that vengeful exes don't spread these images of you and things like that. So this could be potentially a really salutary development, and they're opening it up to many other platforms this week. TikTok and Bumble adopted it as well. And so, content cartels for good. What's your reaction to this?
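The "digital fingerprint" mechanism Evelyn describes is hash matching. A minimal sketch of the idea in Python follows; it uses a plain cryptographic hash for simplicity, whereas StopNCII-style systems use perceptual hashes (such as PDQ) so that resized or re-encoded copies still match, and the function names here are hypothetical.

    # Illustrative hash-matching sketch: only the fingerprint of the image is
    # stored, and new uploads are checked against the fingerprint database.
    # Real systems use perceptual hashes (e.g. PDQ), not SHA-256, so that
    # near-duplicate copies still match; this is a simplification.
    import hashlib

    blocked_hashes: set[str] = set()   # fingerprints of images submitted by victims

    def fingerprint(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    def register_image(image_bytes: bytes) -> None:
        """Add an image's fingerprint to the database (after human review)."""
        blocked_hashes.add(fingerprint(image_bytes))

    def should_block_upload(image_bytes: bytes) -> bool:
        """Return True if the uploaded image matches a known fingerprint."""
        return fingerprint(image_bytes) in blocked_hashes

The key design property is that the platforms share fingerprints, not the images themselves.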

Alex Stamos:

Yeah. So I'm very happy to see this. I was at Facebook in 2017 when this happened. I had nothing to do with it, but I saw... Well, I had a little bit, in that my team spent a bunch of time thinking about abuse cases and looking at the security of these images. So we did a bunch of work. In the original pilot, you were uploading to Facebook via a form that had to be referred to you by a domestic violence and digital rights organization, in Australia only. This was a pilot; it was never meant to be a ton of people. And we did a lot of work on the security of those images. This was actually kind of a radicalizing moment for me. The media reaction to this, like a minor red [inaudible 00:10:09].

Evelyn Douek:

Facebook's doing something, it must be terrible.

Alex Stamos:

Right? NCII, non-consensual intimate imagery, is a really huge problem. If you work with young people, especially young women, you will find that a very significant portion of them have some story of an image of them being shared, being shown to people, being threatened with it. It is a huge, huge problem. And traditionally for adults, if you're over 18, you've had no recourse, because the copyright belongs to whoever takes the photos, so you can't use the DMCA, and it's not CSAM, so it can't be forced down by that. You had no protection. And so there was this Facebook trial that the Facebook safety team and Antigone Davis and some other folks really led, pushing the idea of "we're going to do this trial in Australia." They announced it, and the media reaction was absolutely ridiculous. It was, "Facebook wants your nudes," "Facebook wants to violate your privacy," this and that.

Only a handful of reporters, and I have to give a shout out to Olivia Solon and Kate Conger, there were a couple of reporters who had actually talked to women to whom this had happened. And it's not only women, but it is a majority of women who are victims of this, and they saw that this was actually a positive thing, in that this voluntary step to try to get your nudes taken off the internet was going to be really important. But the media reaction was ridiculous. And it made me extremely angry, because these folks who were talking about tech companies being ethical, having to do things to protect people, when there actually was a proposal to really stop a problem, they made jokes about it and they tried to spin it as a bad thing. And I think honestly the media reaction really slowed this down and it stopped other companies from trying it.

So all those media folks, I want to say Gizmodo and some other kind of sites like that, you should be ashamed of yourselves, because you stopped this good thing from happening for a little bit. Fortunately the safety people have won through. What they've done is you can go to stopncii.org. This is a separate, effectively GIFCT-like nonprofit that I think has been set up outside the companies. It now covers the Facebook products, Facebook and Instagram, but TikTok and Bumble are the first other partners in it. And so if you submit your image, it gets hashed. A human being does look at it. So a stranger that you do not know will look at your nude, because you don't want people uploading pictures of Tiananmen Square or pictures of women without hijabs; you don't want people manipulating this database for political purposes.

So a person looks and says, "Oh yes, this is actually a nude of the person who I think has submitted it." And then they put it in a hash database and it gets pulled off of these sites, and I hope a bunch of other companies follow through. I have to give a lot of credit to TikTok and Bumble for having the guts to join Facebook in doing this. But anyway, a great thing, a big win for the Facebook safety team that's been working on this. And I hope the media folks who made fun of this five years ago kind of do some internal inspection of whether or not they're actually making the world better in this place.

Evelyn Douek:

Yeah, I also just really want to underline the safeguards that you mentioned. I think they're really important in terms of making sure that the database isn't abused. Like a politician isn't uploading incriminating photos of themselves to have them automatically removed. Or, like you said, governments uploading images so that you don't see the protests going on in China, things like that, which are the obvious abuse vectors. And those are concerns that we've seen raised around things like the terrorist content database, the GIFCT database, where there aren't necessarily all of those safeguards in place and we don't know what's in the database at all. So thinking about institutional design as you set these things up is really, really important. And it's great to see some of the thought that's gone into that. And for those keeping track, this is a great example of non-illegal content in many cases where we're really happy that content moderation is taking place and these posts are getting taken down, or at least I myself am. Hopefully that's not a controversial statement.

Speaking of governments and potential abuse, there has been a challenge to a recent New York law filed this week. So the law was passed in the aftermath of the Buffalo shooting, where New York State required social media networks to publish what is called a hateful conduct policy, describing how they will deal with hateful conduct, and to provide an appeals mechanism for users around that content. And it doesn't require platforms to take down that content necessarily; it just requires transparency around what they're going to do. I think this is a really interesting case. A challenge has been filed this week by Rumble, which is the alt-tech version of YouTube, and Eugene Volokh of UCLA. And I just want to say that I'm going to have Professor Volokh on the podcast to do a longer discussion about this later in the week. Just to say: if we are concerned about potential abuse by legislatures in Florida and Texas, we should also be thinking about how legislatures in places like New York are potentially overstepping as well. I think this is a very different law; it doesn't require substantive outcomes. So the question is, is there so much potential for abuse built into the way that the law is drafted? And so it'll be really interesting to see how that case will play out and to talk to Eugene later this week.

Alex Stamos:

Looking forward to hearing it.

Evelyn Douek:

Excellent. And now for our weekly Musk segment. It's a perfect sound effect actually, because it's basically what happens to me every time as we start this.

Alex Stamos:

That was not a sound effect, that was actually a sound that came out of Evelyn.

Evelyn Douek:

Exactly.

Alex Stamos:

I don't know how she made that. That's amazing.

Evelyn Douek:

Yeah. Okay. I guess we have to start with the Twitter files.

Alex Stamos:

The Twitter files.

Evelyn Douek:

That's right.

Alex Stamos:

In a world.

Evelyn Douek:

That's right, where one company controls access to the public sphere.

Alex Stamos:

Where entire political elections are about a single New York Post story.

Evelyn Douek:

Exactly. What a different world we would be in if Twitter had not made this decision about Hunter Biden's laptop. I don't know if we have to do so much recap about the story. What do you think are the salient facts that we should cover?

Alex Stamos:

Yeah, so a couple days ago, Elon Musk previewed that the truth, the incredible truth around the Hunter Biden laptop story, would come out. As anybody who listens to the podcast knows, when the New York Post came out with a story about things that were found on Hunter Biden's laptop, which they said was left at a repair shop, Twitter banned that URL for, I believe, about 48 hours or so. Facebook down-ranked it for a short period of time but never blocked it, and people lost their minds over that, saying it was censorship and such. Since then, Twitter let it back up, and because of the Streisand effect it became a huge thing that was talked about incessantly in the last days of the election season. So the idea that people did not get the story because of it? Not true.

Now, I have said this multiple times: I think Twitter made a big mistake here, in that they stepped outside the bounds of where their policy should be and they were trying to prevent an influence operation by a foreign adversary, everybody would assume the Russians in a situation like this, against the US media. And that's not Twitter's responsibility. The New York Post has to be responsible for the New York Post, and Twitter should not take that on. And so I think it was a huge mistake. I think it was a big learning lesson, but that has been the core of the theory of why Donald Trump lost, which is just hilarious. The idea that people did not know about this story. Twitter doing this gave it much, much more legs than it would've had otherwise. Everybody's heard about Hunter Biden now, I expect. This is all we're going to hear about next year with the Republican Congress: Hunter Biden and all the crazy stuff on his laptop.

Evelyn Douek:

I just want to underline what you said as well, and I've said this as well: I think it was a mistake. I think the policy basis for these decisions was shaky for both platforms, both Facebook and Twitter, and they used different policies, by the way. Facebook was looking at disinformation and needing to fact-check the story, and Twitter relied on its hacked materials policy. It was very sus and it wasn't a good look. And you can see that in the Twitter files-

Alex Stamos:

Twitter files.

Evelyn Douek:

...where you have these executives within Twitter being like, "Are you sure this is right? Can we justify this decision?" And basically, one of the things that I was thinking about reading that is: do we want comms people involved in making policy decisions about content moderation? In fact, in this case they were the ones pushing back and saying, "Are you sure that this is something that we can sell?" But I do think there are really important distinctions to be made between the people who should have responsibility for making these policy decisions and the comms people who should be explaining them.

Alex Stamos:

So why this is back in the news is that this weekend Musk says the truth is coming out. He previews it, delays. And then Matt Taibbi, an independent journalist, famous for attacking Goldman Sachs for Rolling Stone and such, used to be on the left, I'm not sure what you would call him now, perhaps a little bit of horseshoe theory happening now that he's a big fan of Elon Musk's. But Taibbi comes out with this humongous Twitter thread. Instead of his Substack; I think Musk forced him to put it on Twitter. And in it he talks about the internal emails involving this decision. And the big outcome is the decision was made internally by Twitter. Twitter believed it possibly was a Russian influence op, like the hack-and-leak operation that actually did happen in 2016, which I personally saw the evidence for. So I think that's one of the problems here: a lot of the people yelling about this believe the Russians did nothing in 2016.

And so they believe that it's all a fantasy that there's ever been foreign interference in US elections. That is just plain wrong. You can't talk to those people, unfortunately, realistically. And so he released all this, and in it he says that there's no evidence of government interference. The big theory has been that somewhere in the deep state somebody reached out to Twitter and forced them to take this down. And what it showed was the FBI had given multiple briefings to Twitter and the rest of the tech companies and said that they were worried about a hack-and-leak operation, but did not say anything specific about this laptop. They were not the ones that asked for it to be taken down. Somebody from the Biden team did, and Joe Biden was not in government at the time. He did not have the force of law behind him, but he was a political actor and somebody who was obviously quite possibly going to be the next president of the United States and therefore had a lot of power.

His team did point out a bunch of URLs they wanted Twitter to take down. People have looked in the archives of what those URLs are, and they're all naked images or videos of Hunter Biden. So Hunter Biden, whom I'm not going to defend, not a great guy, has made a lot of mistakes, seems to be a real pain for his dad especially, had naked photos with women and of him smoking stuff and all this kind of stuff. But Twitter has a policy that you can't put up naked pictures of people without their permission. We were just talking about that. That's NCII. They do not have an exception for newsworthy figures. If Musk wants to consider an NCII exception for newsworthiness, then he could do so. But you've got to game out what happens when that's the 19-year-old daughter of a candidate and not the 40-year-old son.

So everybody's got to be honest about what happens in that situation, but there is no evidence of any government involvement, and no evidence that we at the Stanford Internet Observatory had anything to do with it. That's this other conspiracy theory we've been dealing with now, people pushing the idea that we had something to do with Hunter Biden's laptop, which we absolutely did not. So anyway, Taibbi kind of just proved that Twitter thought it might be that, that they had huge internal arguments, they took down the nudes but left up all these tweets about the laptop, and they eventually unbanned the Post story, and it became a big deal, and it proved effectively nothing.

Evelyn Douek:

Right? One of the things that's so frustrating about this is, as we were talking about before, it just gets so politicized, and then people react to the person that's bringing this message, and it makes the debate much more clouded, because I think there are legitimate questions to be asking about platform-government relations, especially in these politically heated moments in the lead-up to elections. And for my money, we shouldn't be waiting for internal emails to come out or whatever to know exactly the nature of that relationship. Now, if we look at the nature of that relationship in this case, Taibbi said himself there's no illegitimate pressure here, and certainly nothing that would meet the First Amendment standard, the very high standard of coercion from the government that makes it an improper act that violates the First Amendment. But these relationships should be much more transparent. And you know, you tweeted about this; you gave a number of policy recommendations that you think could help with addressing this concern, which there is a nub of truth to.

Alex Stamos:

Oh, well, there's a totally legitimate concern about governments pushing platforms to censor speech. Absolutely. And we should be concerned in the United States. I'd be interested in your take on at what point it becomes a First Amendment violation and whether that's been litigated or not.

Evelyn Douek:

So there have been a bunch of cases where people have tried, obviously, and they've relied on very general statements, like the Surgeon General saying "Facebook should take things down," or Joe Biden saying platforms like Facebook are killing people. And that is not sufficient, those sorts of general public statements. I think this is going to be litigated more and more. Alex Berenson is bringing a case soon based on internal emails that he's found about pressure from the White House saying, "Hey, what about this particular tweet? Why hasn't that violated your policies?" But the current First Amendment bar is very high, very, very high. It basically has to be coercion where there's no choice. And so none of this is even remotely close.

Alex Stamos:

I would be totally fine with a norm that people sitting in the US government should not be able to say, "We think this is a bad tweet from this American citizen." I think that's a totally reasonable norm.

Evelyn Douek:

We've talked about that before in relation to the Disinformation Dozen, where Democratic senators have written letters to platforms saying, "Hey, look at these 12 accounts. Why are they still on your platform?" That really gets very close to the line of being... If you can't prosecute that speech because it is legal speech, why are you trying to get platforms to do your dirty work for you?

Alex Stamos:

So Elon, if you're listening, I'm sure you're a weekly listener, I have a proposal. If this is something you're really concerned about, and I think it is a legitimate concern, then there's a couple things you can do. One, you can say that instead of giving access to a very limited set of emails to one journalist, Twitter will publicly release any communications you have with any governments or political actors, so candidates, parties, and the like, related to content moderation globally. So if Modi's team wants you to take stuff down in India, you're going to release it. If you have communications with the Ministry of State Security of the People's Republic of China, or really any part of the Chinese Communist Party, about the fact that Twitter labels media outlets in China, you release that. I would love to see all of that content, and then that would make people believe that this isn't just a political stunt, that this is actually a legitimate concern around interference.

Another thing that we proposed when I was part of the Aspen Commission on Information Disorder was a database of all moderation decisions. Twitter could create a database of the last 30 days of everything taken down. Now, the legal and privacy issues here are actually quite complicated, but at a minimum you probably could provide that to researchers under NDA, and they could go through, and you could say, "This was taken down for what we call a disable code: because of hate speech, because of terrorism, because of this." And then people can go look for evidence of political issues. Now, that wouldn't help in this specific case, everybody knew about the New York Post thing and the Hunter Biden laptop decision, but for more subtle things that would be useful. And then third, you could rebuild the team, which is now effectively completely destroyed, that looks for government influence on the platform. So if Musk wants to take those three steps, I would take him very seriously that this is something he cares about.
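To make the second proposal concrete, a moderation-decision database could be little more than a log of structured records, one per action, with the policy reason attached. A hypothetical sketch of what one record might look like; the field names are assumptions for illustration, not anything Twitter has published.

    # Hypothetical shape of one entry in a moderation-decision transparency log.
    # Field names are illustrative; releasing even this much to researchers would
    # still need the legal and privacy review Alex mentions.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ModerationDecision:
        decision_id: str
        timestamp: datetime
        content_type: str   # "tweet", "account", "direct_message", ...
        action: str         # "remove", "label", "downrank", "suspend"
        disable_code: str   # policy reason, e.g. "hate_speech", "terrorism", "ncii"
        requested_by: str   # "internal", "user_report", "government", "political_actor"
        country: str        # jurisdiction the request or report came from

Records like these, shared under NDA, would let researchers look for patterns (for example, clusters of "government" or "political_actor" requests) without publishing the underlying content.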

Evelyn Douek:

Yeah, I'm sure it's just a coincidence that so far the release of internal communications about government-platform relations curiously only goes up to a certain date: all of them are from before Musk's acquisition. But I'm sure that episode two of the Twitter file-

Alex Stamos:

The Twitter Files.

Evelyn Douek:

...will include all of this transparency and really bolster the case. So in short a mess, no doubt we will have more to talk about next week.

Alex Stamos:

It's been interesting, because now that Musk has done this, every time a political decision is made, people are going to demand transparency about who asked for it. And right now this is an issue, right? In Brazil, it looked like Bolsonaro was going to accept the election. He has changed his mind. He's making the argument it was rigged, and Musk was tweeting things about how he needs to look into whether or not Twitter was anti-Bolsonaro and pro-Lula. I would love to see the communications, because there are photos of Musk shaking Bolsonaro's hand and coming up with deals for metals and importing cars into Brazil and such. I would love to see all communications between the president of Brazil's office, all of the Brazilian government, and Twitter, and specifically Musk. I think that is a great kind of transparency. Musk is right. We should look out for government interference.

Evelyn Douek:

Yeah. That presumes that there is some principled decision-making or desire for consistency here. It's unclear what his guiding principle is or how much thought he's even putting into any of this. This week was when he suspended Kanye's, Ye's, account again for posting a swastika, and on a Twitter Space he said, "Well, that was obviously because it was illegal," which lets him hide behind his-

Alex Stamos:

Evelyn, is it illegal to post a swastika in the United States of America?

Evelyn Douek:

That's why Ye's been arrested and put behind bars, obviously, because you just can't release that kind of content. Famously, one of the most famous First Amendment cases is that the Nazis were allowed to march in Skokie, which was a heavily Jewish-populated suburb outside Chicago, because the First Amendment allows freedom for the thought that we hate.

Alex Stamos:

So we can get off of Ye arrest watch then?

Evelyn Douek:

Yeah, that's right. Kind of related to this though, just one last thing on Twitter because I think it provides a good lesson for thinking about how to measure performance of content moderation more generally. So there was a news report in the New York Times this week about a big spike in hate speech on the platform since Musk's acquisition. And meanwhile, Musk is tweeting graphs that show a steep decline in hateful speech. And you had some tweets about how this shows some of the problems with measurement in content moderation.

Alex Stamos:

Right. And so this is where I actually have to take Musk's side here, out of fairness. It looks like Musk is just showing kind of more marketing versions of the same graphs that Yoel Roth showed before he quit Twitter, which are based upon impressions, impressions of how many people are seeing hateful content. The report that was then amplified in the New York Times, and that effectively the vast majority of the mainstream media just straight up took seriously, came from the Center for Countering Digital Hate and the Anti-Defamation League. And I don't think it's very good research, I'm going to be honest; I don't think they did a very good job. Those are both activist groups. Those are groups that have very specific goals. They're not independent researchers. They still have not released any real methodology. If you look at the methodology section for what they've done before, this is something that would be desk rejected from the Journal of Online Trust and Safety.

It is not a methodology that is at all scientifically accurate. And you have to be super careful looking backwards on places like Twitter, because for any kind of bad thing you look for, you will always see a reverse decay curve, because moderation decisions that are made now have a retroactive effect. So if somebody finds a troll who's saying a bunch of racist things and takes them down, that content might last for 12 or 24 or 48 hours, or maybe even a week, and then be taken down retroactively. So you'll always see this kind of decay looking backwards. The other issue is they were just looking for bad words. These are words that sometimes are used in large amounts of pop culture. So if people are posting things like lyrics or quotes from movies, they will include those words, and those include words that in other English-speaking countries are not seen as epithets. And so it's just the most naive way of thinking about hate speech, to just search for specific words. I teach a whole lecture on hate speech, and I start by talking about these kinds of effects you get when you just start moderating based upon keywords.
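The "reverse decay curve" Alex describes is a measurement artifact: if violating posts are removed some time after they appear, a backward-looking crawl sees more of them in the recent past simply because moderation has not caught up yet. A toy simulation of that effect, with all parameters invented for illustration:

    # Toy simulation: a constant rate of violating posts plus delayed removals
    # makes the most recent days look like a spike in a backward-looking crawl,
    # even though the true rate never changed. All numbers are made up.
    import random

    DAYS = 60                  # how far back the crawl looks
    POSTS_PER_DAY = 100        # true, constant rate of violating posts
    MEAN_REMOVAL_DELAY = 5.0   # average days before moderation removes a post

    visible_per_day = []
    for age in range(DAYS):    # age = days between the post's creation and the crawl
        survivors = 0
        for _ in range(POSTS_PER_DAY):
            removal_delay = random.expovariate(1.0 / MEAN_REMOVAL_DELAY)
            if removal_delay > age:    # not yet removed at crawl time
                survivors += 1
        visible_per_day.append(survivors)

    # Recent days (small age) show far more surviving posts than older days,
    # which reads as a "spike" even though the underlying rate is flat.
    print(visible_per_day[:5], "...", visible_per_day[-5:])

The point is not that hate speech did or did not rise, only that a flat underlying rate can still produce an apparent recent surge in this kind of backward-looking measurement.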

Especially the anti-Semitic stuff, which is often much more subtle. If you look at the things Kanye West said, I don't think any of those things would be caught, other than the swastika itself, and that wouldn't show up in a word search. The kind of anti-Semitic things he said did not include a specific word that you could search for. And so anyway, I don't think it's good research. I don't think the media should just accept this kind of research, especially from activist groups, without asking questions and getting the methodology looked at.

Evelyn Douek:

Yeah, and we've seen this in the disinformation context as well, which you know all too well, this kind of disinformation industrial complex where there is a demand for these kinds of studies, and academics who love getting... This is a great headline: hate speech spikes since Musk's acquisition of Twitter. So there is the demand; if you give the supply, you can get a study out there and a citation, which is gold for academics, but it's a bad incentive structure.

Alex Stamos:

And I'm not saying it hasn't spiked, but this does not prove it either way. And if we're going to criticize Musk, it needs to be based upon evidence. It needs to be based upon good empirical work. This kind of thing is what destroys people's belief in the media and in researchers and people who are talking about this. It just makes me very sad, because there's a legitimate argument we have here, and every time you post one of these studies that isn't based upon good science, you end up making that argument much harder to have in the future.

Evelyn Douek:

Right. Plenty of legitimate things to criticize Musk about. No need to give him this easy out.

Alex Stamos:

Well, I mean we are famous Musk defenders here on this podcast.

Evelyn Douek:

Yeah, that's right. Exactly. Okay, so with the Socceroos and the US out of the World Cup, I think we have to announce an abrupt end to our weekly sports segment. Unless you have some more positive news for us, Alex.

Alex Stamos:

Well, right now, if you want to go get tickets for the Paris Olympics, that is open. That will be fascinating. I think it'll be less about content moderation, more about just technical failure: they're going to have this very complicated system for giving out tickets, and a draw, and all this kind of stuff. And so we could very possibly have a global version of the Taylor Swift Ticketmaster fiasco going on here.

Evelyn Douek:

I'm always surprised at what you come up with for our sports segment. You never know where it's going to go, but looks like we will keep the sports segment.

Alex Stamos:

It's going to Paris in 2024.

Evelyn Douek:

Excellent. And with that, that has been the Moderated Content Files.

Alex Stamos:

Moderated Content Files.

Evelyn Douek:

This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't have been possible without the research and editorial assistance of John Perrino, Policy Analyst at the Stanford Internet Observatory. It is produced by Brian Pelletier. And special thanks also to Alyssa Ashdown, Justin Fu and Rob Huffman. See you next week.