Alex and Evelyn discuss the Reddit blackout in response to the company's decision to start charging for API access; Jack Dorsey confirming that India threatened to shut down Twitter if it didn't remove certain content; Spotify's Joe Rogan problem; Meta's new COVID-19 policies; and the latest DDoS attack of state platform regulation legislation from Texas, Louisiana, and Florida.
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:
Twitter Corner
Legal Corner
Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.
Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.
Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Evelyn Douek:
Are you sick and tired of other people knowing more than you just because they actually studied the area? Are you annoyed that human knowledge is vast and you can only obtain expertise in the tiniest pinprick of it in the limited time that you are on this earth? We have the product for you.
Debate Me, Cowards is a 12-step guide to finding your way to truth and happiness. The first pages walk you through the complicated process of how to set up a Twitter account, and it only gets better from there. Debate Me, Cowards! Order now for truth and happiness. First 100 orders get a free podcast mic.
Alex Stamos:
That's perfect. That was perfect.
Evelyn Douek:
Hello and welcome to Moderated Content's weekly slightly random and not at all comprehensive news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos. The big story this week is probably the strike and blackout at Reddit by many volunteer moderators after the company announced that it was going to start charging for access to its API back in April. At the time of recording on Sunday afternoon, there are still around 4,000 or so Subreddits that have gone into private mode in protest of Reddit's decision, according to reddark.untone.uk, which has been set up by people who are tracking this.
It's been playing out over the last week, and there's been lots of news about this. The initial announcement of Reddit's decision emphasized the fact that there were lots of large language models using Reddit data to train their models. But as the debate has gone on, it's pretty clear that a big part of this is that the company has just gotten sick of third-party app developers making profits while Reddit doesn't. Alex, we predicted that other companies would draft in Musk's wake on reducing transparency, when Musk himself, as we've talked about a lot, turned off access to the API at Twitter.
I don't know if we could have predicted, but maybe we should have, that CEOs would do it with so much open fawning over Musk's decision-making and the way that he has been handling this whole thing. Because Steve Huffman, Reddit's CEO, has been openly adoring of Musk in some interviews this week, saying that he's really learned a lot from the way that Musk has taken over Twitter: "Long story short, my takeaway is reaffirming that we can build a really good business in this space at our scale." Did you see this one coming?
Alex Stamos:
I'm a little surprised. As you and I have talked about all the time, Musk is creating the incentive structure for people to follow him and permission for them to do so. But if you look at the results at Twitter, it's been a disaster. In Twitter's case, trying to charge for APIs has in no way made up for the revenue shortfall they have in other areas. And to specifically say, "I want our business to be as good as Twitter's," is just not the biggest goal in capitalism, I think. It's not something I'd want to hear from most public companies, but that is what's driving this: Reddit wants to IPO. They have taken a bunch of money in private investment over the years.
They are still not profitable. Reddit is a really cool website. I really like it. It's a great site. It's a terrible business. They make Twitter look good from an execution perspective, from a revenue perspective. I think Reddit has demonstrated over and over again how hard it is actually to monetize attention. If you want to be profitable in the space, you got to be a TikTok or a Meta where you have a team that's just incredibly aggressive in figuring out ways to monetize and driving growth and driving engagement. In this case, because of that desire to have some revenue bumps before the IPO, Reddit's looking for ways to do that.
They make most of their money by selling ads. So one, they'll be adding more revenue from the API. I think there's a legitimate argument that they don't like the fact that OpenAI, for example, has trained all of GPT on Reddit, so all of this work that Reddit has done to create these communities is being used for free by a number of different large language model vendors. But they didn't have to blow up the third-party apps. It's clear now that they have not adjusted the pricing in any significant way; the goal was to blow up the third-party apps, I think partially because those third-party apps often don't sell ads.
Some of them actually sell their own ads. I think there is some legitimate argument here for Reddit. It is expensive to run these services. It is not cheap to do it. People say, yes, the labor is mostly in the content creation and that comes from individual users, and that's totally true, but it is not free to have tens or hundreds of thousands of servers around the globe, to maintain those things, to write the software. And while a lot of the moderation is done by the communities themselves, Reddit still has to have a pretty significant trust and safety team, a big operational component that can operate in lots of different languages and has the technical backing to backstop the fact that they've got bad Subreddits.
If you have a bad Subreddit, that means the moderators are not people you can trust. I think I'm a little more on Reddit's side on this than some other folks. That being said, they've mismanaged it incredibly, just in the way they've interacted with their user base and the way they've interacted with moderators. It's just an incredible own goal and will eventually be a Harvard Business School case study in how not to roll out a change like this.
Evelyn Douek:
Right. Yeah, exactly, a case study in corporate comms that's been a bit of a dumpster fire. It's also a super interesting study in social media governance because there's a distinctive feature about Reddit, of course, which is you're right, that there's a centralized trust and safety team, but it's also much more decentralized in the sense that it relies on a lot more community moderation with its subreddits, which is what has empowered these moderators to do the blackout and go private.
A study by some computer scientists out of Northwestern University last year found that, based on average pay rates, moderators perform at minimum $3.4 million per year in unpaid labor maintaining the site, writing the rules, and enforcing the rules, which is roughly equivalent to 2.8% of Reddit's 2019 revenue. So not an insubstantial amount of work is being done. It's these moderators that often create the communities and maintain the site. It's been interesting to watch the dynamics between Steve Huffman and these moderators as each tries to invoke the will of the people, the demos, and democratic bona fides.
Huffman is really trying to seize the mantle as the representative of the silent majority. He told NBC News in an interview that the moderators are the landed gentry, the people who just got there first and get to stay there and pass it down to their descendants, and that that's not democratic. Just coincidentally, this is the time when he's announcing that he's going to change a bunch of rules about how you get rid of moderators on Subreddits, so that they can be voted out by users who don't want them.
Alex Stamos:
What a coinkydink! This is a great week to roll that out.
Evelyn Douek:
How's that for timing? I mean, it's been super interesting to see how people are trying to invoke democratic norms. And I guess this is a cost we're seeing of decentralized content moderation: if you rely on your community to do moderation, you can't only give them a say when and how it's convenient to you. They're going to have deep investment, emotional investment, and their own ideas about how the site should be run. I think that's been fascinating.
Alex Stamos:
The two sites that have always been... Whenever there are any problems in trust and safety, people bring up two sites as the example of why you don't need to have trust and safety teams: look at Wikipedia and Reddit. Which is, one, not true, in that both the Wikimedia Foundation and Reddit have backstop trust and safety teams whose job it is to come up with the base policy and the base enforcement. You're not allowed to run a child exploitation Subreddit. In fact, there's a bunch of Subreddits you're not allowed to run anyway.
They've really raised the bar over the last five to six years, which I think was appropriate, because they got into a place where a bunch of Subreddits were really, really nasty and it was causing real risk for Reddit. So no, you never get rid of that baseline. But I think there is a real positive in allowing people to have, above that baseline, different kinds of standards in different communities. You're in this place, people are kind of mean to each other and they're making fun of each other, and that's what you expect when you come here.
In this place, we're being very frank in our exchange of views on certain topics, and that means stuff that you might consider hate speech or slurs somewhere else will be accepted here. You don't have to be part of that community if you don't want to be. There are communities where stuff that you and I would consider disinformation is being freely spread, and that's something you can allow in this situation. But the flip side is what happens when you allow these people to take that kind of ownership.
He's not totally wrong in that both Wikipedia and Reddit create a little bit of these Subreddit tyrants. Just like with Wikipedia: the people who dedicate their lives to Wikipedia create all these crazy rules that make it impossible for anybody new to come in and impossible to fix anything. I keep running into this. Whenever I'm in a controversial situation, you get anonymous people editing my Wikipedia page. And then the rules are that I can't ask people to fix it, but random people from random IP addresses are allowed to put stuff that's not true in there.
It's because they've built this whole scaffolding of, unless you're one of the chosen, you're not allowed to have any say in what goes in there, even if the stuff is wrong and, honestly, defamatory. That's just the reality when people point to Reddit and Wikipedia: this is the flip side. People are giving you their labor, and in return they believe they have some ownership. It's difficult then to adjudicate how you reduce that, and what control they have outside of the things you have specifically granted.
Reddit never told Subreddit moderators that they could control its API policy. That was never something Reddit promised them. It's not like Reddit is rolling back the powers that you get as a moderator. It is a completely different business decision that impacts them, but one over which they had no official power. It's just a fascinating example of empowering those people: there are all kinds of benefits you can get, but in the end they might not stay within the lane of the things in which you have explicitly given them some influence.
Evelyn Douek:
I have to say though, allowing moderators to be easily voted out by a tyranny of the majority if they start making moderation decisions that people in a forum don't like, that also sounds like a recipe for disaster. I don't know exactly what the new rules are going to be, but it's a very easy... Part of the role of a moderator is to be a bit of a buzzkill and enforce rules sometimes in a way that is going to upset people.
Alex Stamos:
The challenge of Reddit is that you have Subreddits that hate each other and effectively raid each other. If you create a mechanism where, if you flood into another Subreddit and are now the majority of people, and there's no standard of whether or not you've been part of it or anything like that, then yes, you're going to end up with these trolling activities where you're like, oh, the knitting Subreddit is now a Nazi Subreddit because we decided we're going to take over r/knitting, bring all of our friends, and then replace them with our neo-Nazi moderators. I do think that is a very risky direction to go. Yes.
Evelyn Douek:
Yeah, poor grandma. Anyway, I'm sure Huffman has...
Alex Stamos:
What happened to my knitting Subreddit?
Evelyn Douek:
Exactly. I'll just pick up this pattern and knit it. Oh my god, what is this?
Alex Stamos:
What is this pattern?
Evelyn Douek:
Anyway, I'm sure Huffman has thought through all of these difficult issues and is not at all just rolling this out in a fit of pique.
Alex Stamos:
This thing seems very well-planned. It's not like they're just having an emergency meeting every afternoon and then completely pivoting. To be frank, I don't think Huffman's going to survive this because I think this is not the kind of thing that investors are going to want to see as, oh boy, this person knows what they're doing. Whether or not you agree with the original goal here, just the way it's been rolled out and the way they've really floundered, it does not bode well for the leadership team.
Evelyn Douek:
I'm so glad to hear you say that because it's baffling to me. You're taking your inspiration from Musk in both the substance and form of your communications around this issue, but there's a really big difference, which is that Musk cannot be dethroned and can get away with this because he owns the company.
Alex Stamos:
Again, Musk has lost two-thirds of the value of Twitter, according to, I think it was Fidelity, which had to mark-to-market its bonds or its loans. That's a crazy thing to put in your prospectus, in your S-1: we want to IPO and then lose two-thirds of your money. That's effectively what you're saying if you're trying to copy Twitter.
Evelyn Douek:
Right. The CEO is taking inspiration from this guy, for whom we have a special sound effect to introduce the segment about his business decisions. The sounds of inspiring leadership. Over in the Twitter corner this week, Linda Yaccarino, the new CEO, published her first memo to employees, talking about how Twitter is on a mission to become the world's most accurate realtime information source, which is news to me.
I didn't realize it was on that mission given its latest policies and practices, but it's good to hear. Good luck, Linda. At exactly the same time that she's writing this memo, Musk is tweeting about wanting modern-day Roman dictators, who are known for extremely violent takeovers. It's good brand safety.
Alex Stamos:
Not just any Roman dictator, right? He's not talking about Augustus or Julius or somebody who has a reasonable track record, even if they were terrible. It's Sulla, who is straight up... I mean, one of the worst figures in all of Roman history, which, of course, he knows, because he linked to the Britannica article. You can read through all the horrible things he did. Anyway, he's basically saying he's given up on democracy; that's what that tweet meant.
While people thought it was just funny for him to do that, it's a pretty aggressive indication that he does not want to operate within democratic principles. It is actually, I think, another sad indication of a significant portion of our polity deciding that democracy is not the way forward. That if they can't crush their enemies completely, then it is not worthwhile to operate within a democratic system. I find that very disturbing from a person who owns and controls such an important information platform.
Evelyn Douek:
Right. While Linda's saying we're trying to be the world's most accurate realtime information source and Musk is tweeting about Sulla and giving up on democracy, Jack Dorsey confirms what everyone already knew basically, but it was important to get it on the record, which is that during his time as CEO, India threatened to shut down Twitter if it didn't take down certain posts critical of the government. This is something that we've covered extensively on this podcast, the fight between Twitter and India and the increasing pressure that India is putting on Twitter to make it less accurate in terms of information by taking down information critical of the government.
In response, an Indian government official said that Twitter under Dorsey and his team had repeatedly violated Indian law, but that, coincidentally (the statement didn't name Musk), Twitter has been in compliance with Indian law since June 2022. The problems that led to Dorsey getting such threats no longer seem to be arising. Dispiriting, but again, not that surprising.
Alex Stamos:
It is sad, but good luck, Linda.
Evelyn Douek:
That's right. That takes us to the controversy du jour, which, of course, somehow Musk is also a part of.
Alex Stamos:
Because he inserts himself in everything, right? He's like, oh, somebody's in trouble somewhere? How can I be part of this? How can I jump into it?
Evelyn Douek:
Exactly. This is the Joe Rogan and Spotify story that's getting a lot of attention today. Joe Rogan had RFK Jr. on his show in the last few days, and it was a masterpiece of misinformation, false claims, and bad medical information, just plain wrong information, which we're not going to go through because it's not important to our part of this, but there have been plenty of people on Twitter and elsewhere debunking all of these claims.
This show goes on, and Spotify has basically done nothing in response. Motherboard wrote an article about this. And when a microbiology professor tweeted out this critical Motherboard article, Joe Rogan challenged the professor to a debate, which is exactly the way that we want to settle all of these things.
Alex Stamos:
Well, to be clear, he said he wanted him to debate RFK.
Evelyn Douek:
Right, right, on his show, I believe.
Alex Stamos:
Unlimited time, which is a great thing to say: hey, I'd love for you to come on, and, he said, for an unlimited time. What would it be, seven or eight hours of debate? You're not going to convince RFK Jr. of anything. There's no winning against a guy like that.
Anyway, "debate me, I will provide the ring in which you gladiators fight, and we'll make the money. I will toss in the popcorn, toss in the blood, and take all the money from the spectators" is not a bad offer for him to make. It totally makes logical sense for him to want to be that guy.
Evelyn Douek:
Right. In all of your studying of mis and disinformation on social media, Alex, do you think that it could have been solved much more easily if we just had professors debating random podcasters and politicians? Do you think that would've got us somewhere?
Alex Stamos:
Ugh, God. Okay. I mean, there's so many layers to this one. I don't want to get into the anti-vax stuff too much. I think there are legitimate complaints about how vaccine information was handled, specifically around the COVID vaccines. There have been a number of projects, including ours, where specifically the goal was: let's understand how people come to grips with something so important in realtime. Our friend Zeynep Tufekci, I think, has written a lot of good stuff about how there were claims and counterclaims and things were really confusing during COVID, because we watched the science develop in realtime.
But there are these anti-vaxxers for whom this is not about COVID, this is not about realtime. They are making arguments against vaccines for which there are mountains and mountains of scientific evidence. Whatever happens in realtime just makes them believe that they were right, that it justifies everything they've done. The, I think, reasonable reaction to COVID restrictions and such, which have become politicized, has opened the door for the traditional anti-vaxxers, the folks who don't want the basic vaccines that are necessary to send your kids to school, because they want kids to die of diseases like in Little House on the Prairie or something.
Like, oh, I'd love my kid to have measles, mumps. That sounds fantastic. The fact that it has opened the door for those people is a really, incredibly sad thing. We see in quantitative data that support for traditional vaccination has actually dropped in a number of political groups because of this fight. In this case specifically, I don't see this as a content moderation issue. It's not a content moderation issue, because Joe Rogan is not just a user-generated-content poster on a platform. Spotify pays him $100 million to bring his show uniquely to their platform.
The relationship between Spotify and Joe Rogan is the same as the relationship between Netflix and the actors that are in its movies, between Fox News and Tucker Carlson as it used to be, between CNN and all of its on-air talent, and between NBC and everybody who's on MSNBC. That is the relationship. They are just paying a person to create content that they then sell ads or subscriptions against; subscribers pay them money to get this content. To me, they're absolutely 100% morally responsible for everything he says.
They're just straight up paying him. I don't see it as a content moderation issue. I don't see it as a Section 230 issue. Spotify has just decided that they're going to be a platform that carries this kind of stuff. And whether there's actually any civil liability, it doesn't sound like there is. I don't think you think there is. What do you think, law professor?
Evelyn Douek:
I mean, I think this is tricky and possibly untested. Of course, Spotify will say Section 230 protects it, just as it protects any other platform with content on its site. Now, Section 230 only protects websites for content provided by other people. The argument would be that by entering into this agreement, this is no longer just Joe Rogan's content, but that Spotify is a co-producer in the production of that content, for all of the reasons that you've said.
Now, I don't know actually what the outcome would be there, but we talked a lot about how the Supreme Court was interested in revenue sharing as a way of piercing Section 230 in the Gonzalez and Taamneh cases that we talked about a couple of weeks ago. That was something that they kept bringing up as a way that we might see a way through Section 230. I don't know the actual legal outcome in this particular case, and I think it would also depend a lot on the terms of the agreement, which we haven't seen.
It's all private. I don't know. But I mean, for all of the reasons that you said, morally, absolutely, it's a completely different situation from someone just posting content on a platform without any more extensive relationship with that platform.
Alex Stamos:
I mean, if that's a valid legal defense, then Fox should have tried it in Dominion: oh, we're just a platform upon which Tucker Carlson has a TV show, right? Because I mean, other than the format...
Evelyn Douek:
It's internet exceptionalism, sadly.
Alex Stamos:
Except Fox is online too. I mean, if Tucker Carlson's show was only on Fox Nation, if it never hit cable, that just seems like a bizarre... I mean, one, it's a bizarre reason to treat it differently. I don't think legally it would have been treated differently; I think they would be held just as responsible. Anyway, I don't think there's actual liability here, but people should be mad at Spotify, and their defense should not be "we allow lots of different users." You're straight up paying this guy. You are making a ton of money off of having him exclusively on your platform.
This is what you chose to have. Whether you like the content or not, this is Spotify's choice. I think people need to vote with their wallets here. It's just like, if you don't want to watch Fox News, then you don't watch Fox News. If you don't like that Spotify pays for RFK Jr. to be up there, and gives Joe Rogan a lot of money to let RFK Jr. just say things that aren't true, then you should not have a Spotify subscription. Sorry. It's a reasonable... This is not a platform issue. It is not a content moderation issue.
Evelyn Douek:
If all of this sounds a little bit familiar, it's because we went through it, I think, only a couple of years ago around Joe Rogan and Spotify specifically, because he was in the news at the time for making a whole bunch of misinformation claims and discouraging people from getting vaccines. Spotify basically acknowledged some of its responsibility by introducing a COVID misinformation policy after that controversy.
A Spotify spokesperson pointed to that policy in response to a request for comment from Motherboard, and their defense was that this episode didn't get taken down because, although Rogan and Kennedy suggested during their conversation that COVID vaccines are ineffective and are injuring and killing large numbers of people, they didn't explicitly say that those vaccines were designed to create that outcome. So it didn't fall afoul of their policy, which is, okay... Yeah, exactly. Great loophole there.
Alex Stamos:
What kind of policy is that? If you're trying to protect people, if you believe that you're going to have a policy that's going to protect people, then it should be about: are you giving people incorrect information that leads them to make the wrong decision? It is not about whether you say what the purpose of the vaccine is. You're saying, oh, it kills you, but it wasn't intentional. That doesn't change the outcome of what decisions people make. It's just such a bizarrely written policy, if that's actually what they believe, rather than just trying to backfill whatever... More likely it's that they've made a huge investment.
They probably do not have contractual terms that really allowed them to have control over this content. They made a pretty crazy decision to give him money in almost any circumstances. I think that's much more likely than this actually being a policy that's being applied correctly.
Evelyn Douek:
Right. Musk's part of this is that after the Baylor professor was challenged to the debate by Rogan, Rogan also proposed a $100,000 donation to the charity of Professor Hotez's choosing if he agreed. Musk said that Hotez hated charity for not accepting the debate and, of course, elevated this whole thing. And this is where it takes a dark turn. I mean, this is all sort of funny, but also really not, because there's a lot of harm created by this.
Specifically, the professor tweeted that he has now been stalked in front of his house by a couple of anti-vaxxers who turned up taunting him to debate RFK Jr. This is not the kind of thing that a microbiology professor should have to deal with, but thank you, Elon Musk, for creating that situation.
Alex Stamos:
Congrats, Spotify, if you end up with violence between folks, right? Texas is not a state where you should really show up on somebody's front lawn uninvited. The homeowner has lots of options with which they can respond to that, for which they will not be punished very aggressively. If somebody gets seriously hurt or killed over this, then Spotify is not going to feel so great saying, oh, it's just a Section 230 issue. I mean, that's clearly not going to hold up in that situation.
Evelyn Douek:
Right. In other COVID-19 misinformation content moderation news, over to Meta, which this week published its responses to the Oversight Board's recommendations about its COVID-19 misinformation policies. Now, we talked about the Oversight Board's decision in this case a couple of months ago. This is all a complete mess. Meta referred the case to the board and the board took it in July 2022, nearly a year ago. The board didn't announce its decision until April 2023, nine months later.
We talked about it at the time, but the board's decision after those nine months was basically a giant shruggy emoji: "Wow, COVID misinformation, that's really tough. You should assemble some experts and do a lot of consulting to try and work out what to do." And after those nine months, when this multimillion-dollar board setup came down with this decision, just two weeks later the WHO announced that it had formally lifted its designation of the pandemic as a global emergency.
Meta has said, "Well, given that WHO announcement and recategorization of COVID, we no longer want to enforce our global COVID-19 misinformation policy. We're doing a reassessment." It's all kind of a bit of a moot point anyway, given how long the board took to take its decision. The current situation is that Meta is saying that it's going to roll back its COVID-19 misinformation and it's going to go on a country by country basis. In those countries where they still have a public emergency declared, they will use escalation enforcement approach for COVID-19 when people flag it.
There is no list yet of the countries Meta is proposing to do this in. I'm not exactly clear on what the timeline is, how it's going to be implemented, or whether we're going to have much visibility or transparency around it, but basically, as a result of this extensive process, where we end up at the end of the day is a very haphazard, opaque situation.
Alex Stamos:
I continue to not really understand what the Oversight Board is for. This is clearly exactly what you pay all of these people for: to consult with them during something like this. I think part of it goes back to this original idea of treating these like court cases. Instead of treating the Oversight Board as more of an advisory board where they're deeply involved in the discussions, you toss something to them, give them evidence, and then you wait a year for the tablets to come down the mountain and be boomed out in a voice from a burning bush. Maybe it's better to have an ongoing engagement, because otherwise you end up with situations like this.
Meta changing its policy makes sense: the structure of COVID's impact on people has changed. It is reasonable to say we're not in the middle of the deepest part of the risk. I think also that initial period we were just talking about, that Zeynep talks about, of the creation of knowledge of what is up with COVID and what's up with the vaccines. We're past that. Now there are still debates, but there's a lot of evidence out there. You might want to change what your policies look like. But why have something that's completely disconnected from what the Oversight Board does? It makes absolutely no sense.
Evelyn Douek:
Yeah, I completely agree with that. I think this judicialized model is not an appropriate way of thinking about this. I guess what Meta gets from it is that it can announce this new country-by-country designation and say, "We're doing this consistent with our Oversight Board," because the Oversight Board had said a global policy doesn't make sense in a circumstance where different countries are having different experiences. Technically, Meta's right that, at a very high level, it is just doing kind of what the Oversight Board told it to do.
It gets to say that in an 800-word Washington Post article, which doesn't really delve into how this process played out or what exactly happened. It looks better, I guess, than Twitter, for example, which just silently stopped enforcing its COVID-19 policy. It maybe looks a little bit more legitimate and makes Zuckerberg and Meta look good for doing it through this process rather than more arbitrarily.
But at the end of the day, I'm not sure in substance how different it really is. In other news, the board also released its 2022 annual report in the last week or so, in which it said that it received nearly 1.3 million appeals over the 2022 period. During that period, it published 12 decisions. Not only do the cases take forever...
Alex Stamos:
They're almost there. They're almost...
Evelyn Douek:
One a month.
Alex Stamos:
Right. Only 1,299,988 left to get through in the backlog.
Evelyn Douek:
That's right. It'll be fine. Just a few weekends and you'll get there. I feel a little bit like the person that's complaining that the food is bad and the portions are too small. But still, I do think that it's ridiculous to only publish 12 decisions over a 12-month period.
Alex Stamos:
I mean, the ridiculous part was to advertise, "we're here for individual appeals." That was never the job of the Oversight Board. The job was the thing we were just talking about: hey, there's a big argument over what the COVID policy should be, so let's have a big Supreme Court-like process by which you argue over it. It would be better if that was transparent. It would be better if you had individual steps. But at least that makes sense.
For them to be looking at a random person who got their Facebook post taken down and is going to appeal it, that's just ridiculous. You could never structure any kind of body like this to handle that. You need policies in place that are then enforced by thousands or tens of thousands of operational people, and AI and all kinds of automation, to get anywhere close to that volume.
Evelyn Douek:
Right. I mean, to be fair to them, when they take these cases, those 12 decisions, they do try to push for systemic changes. They aren't really focused on the individual piece of content in question. But at the same time, you cannot do that just once a month, chiming in and saying, "Hey, think about it harder." Anyway.
Alex Stamos:
I feel like that whole thing comes from the over-American-lawyerization of the Oversight Board, because it's all about having standing in court. You had to have content taken down to "have standing" to be in front of the Oversight Board. Whereas "what should the COVID disinformation policy be?" is not something that you need an individual person to appeal to know, wow, this is important.
What your policy should be in the 2024 election is something that the Oversight Board should clearly be engaging with the company on, whether or not somebody's appealing something. But I see the problem as being that every single person involved in standing this thing up had a JD from an American law school. And therefore, civil procedure is what infected the thought process for setting up something that could have been really unique and useful.
Evelyn Douek:
I can't help myself, I've been biting my tongue, but the last five minutes or so of conversation, we have been recreating my argument in an article, Content Moderation as Systems Thinking, where I basically argue exactly that, that lawyers and especially American lawyers, when you see a speech dispute, you think, ah, court case. You need a plaintiff and a rule and a judge.
Especially with speech in the First Amendment, you think you have to decide on these individualistic rights terms. That's just an inappropriate analogy when you're talking about the scale and all of the trade-offs that are involved in content moderation, which make it a completely different kind of problem.
Alex Stamos:
Gosh, did you publish that anywhere, Evelyn?
Evelyn Douek:
Yeah, I think so. You can Google it. It comes up.
Alex Stamos:
I think it's a law review for an inferior school in Boston.
Evelyn Douek:
That's right. Somewhere in Boston. Thanks for the plug. Speaking of law and lawyers, let's head to the legal corner. Thank you. Just a few quick updates from the states, which are really buzzing with activity in regulating platforms, and I'd even missed a couple of these. Last week, Texas signed into law a bill requiring kids under 18 to get parental consent before joining a wide variety of social media sites. Kudos to The Verge for covering this, because it was barely covered in the mainstream press.
I only found it when I was looking for information on a Louisiana bill that had passed a few weeks ago that would do the same thing. But the Texas act is broader. The Louisiana bill has only been sent to the governor for signature and would only require... It's a pretty simple bill, just requiring parental consent for minors to have a social media account. But the Texas bill has all of these other provisions requiring platforms to prevent minors from seeing "harmful material," which is defined not completely at large, but pretty broadly, to include a whole bunch of stuff: sexual material, et cetera.
It requires platforms to use filtering technology and hash-sharing technology and all of these other ill-defined terms that are just thrown into the act to prevent minors from seeing this kind of thing. It's a pretty expansive bill, and it unsurprisingly has attracted the attention of the industry group NetChoice, which said that the law violates the First Amendment many times over. They haven't challenged it yet as far as I can see, but I doubt a challenge will be far off. It can be added to the NetChoice oeuvre of cases that they're bringing.
Alex Stamos:
Yeah, they're really racking them up. It's going to be NetChoice versus every attorney general in the United States in their official capacity. I guess these states have just decided that there's no downside to having a signaling bill, because you're never actually going to face any consequences, since whatever you do is immediately going to be stayed by a federal judge. So you might as well just do it so you can say that you're holding them accountable and keeping families safe, or keeping kids safe, or giving parents choice, or whatever your slogan is. That's going to become the standard thing.
Unfortunately, I think if you have 45 of these bills, one of them will actually survive, and then we're going to end up with all these crazy consequences, like one state in which you have to show ID to get an internet account or something. I mean, it's a huge waste of money for all the litigation that's happening and just a completely ridiculous way to make policy. But I understand the motivation of all of these governors and state legislatures to try to at least put their fingerprint on it so that they have something to advertise.
Evelyn Douek:
Yeah, right. It's like a DDoS attack of state legislation of terrible ideas, and one's going to get through. I've been joking that teaching the First Amendment in five or six years is just going to be teaching the NetChoice cases one through seven, because they're going to contain all of our free speech principles for a digital age as NetChoice goes around challenging all of these bills, including another one in Florida that was signed into law by Governor Ron DeSantis this week, the Florida Digital Bill of Rights. Alex, it's exactly as you say, this is political posturing and signaling.
It includes some basic consumer rights and privacy rights, but it only applies to the largest tech platforms, and it also includes a couple of other things, like prohibiting government officials in the state from making requests to social media platforms to remove content, and requiring search engines to say if their search results are influenced by political partisanship or political ideology, the number one issue in content moderation and online safety these days, for sure. Yes, when you've got a political campaign going these days, the first step is to announce some platform regulation bill.
Alex Stamos:
Maybe you can get famous by writing the first restatement of the NetChoice cases. It'll be the must-have book on the shelf of every First Amendment lawyer in 2037.
Evelyn Douek:
Excellent. That is my 10-year plan. You heard it here first, folks. The first restatement of the NetChoice cases. And that brings us to the end of our program today. It's been your weekly Moderated Content update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent.
This episode wouldn't be possible without the research and editorial assistance of John Perrino, policy analyst at the Stanford Internet Observatory, and is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman.