Alex and Evelyn discuss the Blue App's decline, YouTube's new experiment in content moderation, a US military-run information operation, the Surgeon General's call for a warning on social media, and NY's law restricting algorithmic feeds for minors.
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:
Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.
Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.
Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Alex Stamos: What summer is it? So we're through Hot Girl summer. Is this a hot COVID summer again, because COVID is back?
Evelyn Douek: Supreme Court Summer for me.
Alex Stamos: Supreme Court Summer.
Evelyn Douek: I don't know.
Alex Stamos: Well, no, for you it's Supreme Court Winter, because it's like you're still using this for the Southern Hemisphere standard here. Is this crazy for you that everybody calls it summer? I mean, although your winter's pretty warm, it still feels like our summer.
Evelyn Douek: Yeah. Although the problem, I've never been so cold as I am in a Sydney winter because we just don't believe that it gets cold and we therefore do not prepare for it to get cold. And so we insist that it's not cold. However, we do feel cold.
Alex Stamos: It's like when you look at other people and they're richer than you and you suddenly feel poor. It's that in Sydney, in the rest of the anglophone world, it's just you guys and New Zealand, and everybody else is in summer. So all you're seeing is the summer beach movies and the summer TV, and you're watching the BBC and it's sweltering hot in London. And so when it's, whatever, 12 degrees centigrade, you guys feel like you're dying.
Evelyn Douek: Right, yeah. Feeling left out. It's cold both socially and temperature wise. So it is rough down there.
Alex Stamos: But everything's trying to kill you, which you always have to point out. So for those things, maybe you go in the winter, our summer, because everything's hibernating.
Evelyn Douek: That's right. Exactly.
Alex Stamos: All the deadly stuff, okay.
Evelyn Douek: Exactly. That is the one benefit of the Australian winter is the snakes and the spiders and the-
Alex Stamos: Death koalas, yeah.
Evelyn Douek: Exactly, the death koalas, the drop bears, which are the thing of nightmares. They're all having a nice nap. Welcome to Moderated Content, the stochastically released, slightly random, and not at all comprehensive news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos. It is summertime, which for me means one thing, which is that I'm on tenterhooks waiting for the big platform cases that the Supreme Court is about to hand down any day now, surely. Every opinion day I sit with my coffee, refreshing SCOTUSblog, waiting to see whether the Supreme Court is going to upend the internet and the First Amendment as we know it, and every day they disappoint me.
So you and I, Alex, we were planning on maybe doing a podcast last week and we kept pushing it back, pushing it back, pushing it back, because the Supreme Court is being hard to get on this one. So that's good fun. Okay, but we are now very close to the self-imposed deadline for the Supreme Court at the end of June. So hopefully by this week, and maybe even by the time you're listening to this, we will have the Supreme Court opinions in the NetChoice cases and in Murthy v. Missouri, which obviously listeners to this pod will know well, and we will bring you breaking coverage of those decisions once they come down. But for now-
Alex Stamos: So you've heard it here officially, it's hot First Amendment summer.
Evelyn Douek: That's it. That's what all the cool kids are calling it for sure. All right, Platform-palooza Summer starting any day now. But in the meantime we have our usual normal news update episode for you and a bunch of odds and ends to cover. Now Alex, one story that we want to start with was one that caught your eye from 404 Media, with the headline, "Has Facebook Stopped Trying?" Now, Betteridge's Law of Headlines states that any headline that ends in a question mark needs to be answered with the word no. So has Facebook stopped trying, no or yes? What's your take on this one?
Alex Stamos: It's a little bit of a clickbaity headline, but this is a good story by Jason Koebler at 404 and I think he consolidates what a lot of people have been feeling in the space, which is that things are getting really bad, especially on the Big Blue App, as we called it, Facebook, but on a number of Meta platforms, in a variety of different trust and safety ways. First is just the spam. So our colleagues at SIO, including Renée DiResta, I thought had a really good exploration of the shrimp Jesus phenomenon, which is the specific image they're focused on. But there's this overall phenomenon of a huge amount of AI-generated clickbait that's just trying to engagement farm. And you see a ton of this on the Big Blue App. Now, I don't spend a ton of time on Facebook, but it's still the way, if I want to see, especially, my high school friends, what they're up to, updates on their families and stuff. It's a good way to stay in touch.
And my God, if you click off of just your friends, you find so much crap. And so the spam problem has gotten huge, and as we've discussed many times, spam is the water that finds all the cracks; usually hidden in any spam problem you have all kinds of other abuses. And then you have the more specific things that people are talking a lot about, financial sextortion being a humongous problem, and it's driving a lot of the legislation that we're going to be talking about today. And so I think there's actually two things going on. One, it does feel to me that Meta is giving up a little bit on the Big Blue App and it's just not getting the ratio of the amount of work that was supposed to be going into it. There have been significant cutbacks in trust and safety overall at Facebook.
There's been a bunch of layoffs, and I think we might be seeing the downstream effect of both the layoffs overall, but then the ratio of how many people are put onto each platform. And the future of Facebook is clearly Instagram and free LLMs, for which I have no idea how they're going to make money, and Oculus, which loses a ton of money; but clearly Instagram and WhatsApp are the moneymakers now. And so the Big Blue App, despite still generating a lot of revenue, is not seen as the future. But then second, I think even before the layoffs, there was a shift inside of Facebook away from human investigations and towards what is euphemistically called at-scale responses, which usually means things that are automated.
And that is, I think, the core of a lot of the sextortion issues: when you're dealing with intelligent adversaries who are financially motivated, they will change what they're doing over and over and over again to stay ahead of the robots. And we've talked a lot about this on the show. We had a big paper on child sexual abuse material on Instagram. It is a real problem right now and I think they still have not gotten their arms around the size and scope of that issue. So I wouldn't say Facebook's given up, but I think we have seen a significant disinvestment in trust and safety work and it's making all of these different Facebook products worse.
Evelyn Douek: Yeah. I just want to note the subtle burn there: when you were describing the future of Meta, the company, Threads didn't even warrant a mention. It was Oculus before we even get to Threads. So that's saying something, unintentionally, I think.
Alex Stamos: I'll mention it as a Threads user. It's actually the engagement farming. You don't see the CSAM and stuff, but man, the spam of basically ripping off other people's images, other people's videos from TikTok and YouTube and stuff, and just trying to engagement farm, is really bad on Threads for sure.
Evelyn Douek: Yeah, the signal-to-noise ratio was just not right for me. And it means that I unintentionally just find myself not opening the platform very often, even though I would notionally be exactly the target market for an app like that to replace what Twitter was. It just wasn't giving me the dopamine hits that I wanted apparently. So yes, that's Threads and yeah, it was a good story. I think it reflected exactly as you said, a sentiment that a lot of people are feeling, which is that there is something different about trust and safety in this moment, a lot of disinvestment, and we're seeing that across a bunch of platforms. The story speculates that one contributing factor here for Meta might have been Musk, in that Musk lowering the bar so far meant that as long as you weren't overtly platforming and retweeting Nazis and telling advertisers to go (Beep) themselves, you are the responsible platform by comparison. So yeah.
Alex Stamos: Cannes Lions just happened, which is when all of the ad salespeople are all together from all of the platforms, both traditional media and new media slash social media, and Musk was out there trying to undo everything he did, and it was just hilarious, of like, "Well, there was some subtlety to how I told you to go F yourself." But you're right, the bar is so low that as long as Mark Zuckerberg isn't personally amplifying white supremacists and personally going up to CEOs of major Fortune 500 advertisers and cussing them out to their face, then cutting 10% or 20% of your trust and safety team, that's nothing [inaudible 00:08:03], it's no big deal. Yeah. But it's unfortunate because I think it is going to... The board at Facebook is a weird thing. It is a publicly traded company in which Mark Zuckerberg still controls not a majority of the actual economic benefit, but a majority of the shares, the voting shares. And so you have this board that theoretically can fire Zuck, but in practice Zuck can fire the board and the board can't really fire Zuck. It's a really weird corporate structure.
And if I was on the board, which is unlikely to happen, let's just say, but if I was on the board of directors of Facebook, I'd be asking some serious questions, because I do think it will have a long-term impact on both where advertisers want to spend their money and time spent, because it's just becoming a bad experience to be on. And we talk a lot about how companies want to do trust and safety not because it's legally required, not because they get threatened, but because in the long run it makes the product better, and therefore people spend time on it, and therefore they sell more ads. And I do think they're not really quite in a death spiral yet, but Facebook is heading in a direction in which you can see a serious drop in the usefulness of these products for people.
Evelyn Douek: Yeah, I mean it'll be interesting to see. I mean, we have seen a cyclical nature to all of this, and a lot of the story was talking about how Facebook had been rolling out all of these initiatives in 2018, which was in response to the techlash, the backlash against perceived failures in 2016. 2018 was the high moment of rolling out all of these blog posts and the Oversight Board and everything to try and deal with the fake news PR crisis. And then obviously in the wake of the 2020 election, we've had the backlash to that initiative. And so it'll be interesting to see going into this election whether there is a backlash to the backlash to the techlash, and whether we see a requirement to reinvest if there's a whole bunch of catastrophic failures, or whether this is the new normal.
Alex Stamos: Yeah. And I mean, the problem here too is individual personnel movements of certain people; if they're VPs or SVPs inside of a company like that, it actually matters. And so I don't think it's a big plan as much as they've done layoffs, and they have some people in charge who believe you can do stuff at scale, that you don't need the investigators. And then honestly, we're seeing the downside of the Frances Haugen stuff. My hot take has always been that there's a lot of interesting things in Haugen's documents, but most of the documents she leaked are people just doing their job, and the media backlash against Facebook because of those leaks has destroyed the ability for any trust and safety team to do internal research ever again.
Nobody, including Facebook, will ever invest in trust and safety in the same way they did pre-Haugen, and she single-handedly basically ended an entire era. And you can believe what you want about whether that was a good thing or a bad thing. But I do think this is one of the natural consequences. If you create a situation where you can't have the trust and safety team be honest internally and have any discussion internally about how things are breaking, then the outcome is that they're much less effective, and/or you have direct layoffs of quantitative social scientists who might possibly create stuff that says, "Hey, we have a problem. We need to fix it," because you're going to end up on the front page of the Wall Street Journal.
Evelyn Douek: Yeah, so one response to the backlash has been this rolling back of trust and safety initiatives. Another, which we've seen from a couple of platforms now, is outsourcing it to users, or moving it away from centralized content moderation. And the first platform to do this in the most significant way was Twitter, with what is now X's Community Notes feature. And this week YouTube has announced that it is rolling out something similar. So it's testing an experimental feature to allow people to add notes to provide context to videos.
So the examples that it gives are that you can clarify that a song is meant to be a parody, or point out when a new version of a product is available, or let viewers know when older footage is mistakenly portrayed as a current event. Something along those lines. I mean, this is something that I actually see quite a lot of on X these days with the Community Notes feature. And so obviously YouTube thinks there's something beneficial there to replicate, because it's rolling out its own version, and I really don't have a good sense of what to expect here or how to think about this. What's your take on this one, Alex?
Alex Stamos: Yeah, this is great. I mean, Community Notes is one of the best things old Twitter did that Musk has not killed, partially because it doesn't require that much money, I think, to operate it. And so in laying off a ton of Twitter people, it had already been built and you just have to keep it running. One of the reasons it works so well is that it's based upon actually a really complicated algorithm that keeps reputation scores for Community Notes writers and tracks whether or not they're writing notes that are of interest to people of multiple backgrounds. So effectively it clusters your relationships, because one of the fears of anything like Community Notes is that it'll be brigaded. That you'll have people for whom everything they disagree with politically will end up with a community note.
And all the people in that same group will vote up that community note and vote down everything else, which is probably effectively what happens. But it doesn't get reflected, because the algorithm sees: is this an interconnected group of people that agree with each other all the time? Are they all voting up on this note? And so the notes that get widespread support actually get surfaced on Twitter, which was a really interesting model, and it's actually worked out really well. YouTube has not documented it in detail, but they specifically say they're using bridging-based algorithms, which is the same class of algorithm as what Twitter uses. Not exactly the same, but the same class of algorithms.
And so it will be fascinating to see if theirs is as effective as Twitter's. The other thing that was interesting about this is that you have to have a YouTube channel. So there's a big difference: if you're on Twitter and you're tweeting every once in a while, that's a lot less work than being like, "I have a YouTube channel." And so that is an interesting self-selection that once again highlights that YouTube's relationship with their content creators is incredibly, incredibly important for them. And so it is not inconsistent with that, but I do think that is an interesting change: the subset of people who can write and vote on these notes might be quite different from the viewers. The big, diverse viewership will be very different from the much smaller group of creators who could possibly vote on these things.
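For listeners who want to see the mechanics of the bridging idea Alex describes, here is a minimal sketch in Python. It is a toy matrix factorization in the same spirit as X's open-sourced Community Notes scorer, not X's or YouTube's actual implementation; the function name, toy data, and parameters are all illustrative assumptions. The key move is to regularize the viewpoint factors harder than the note intercepts, so that only agreement that crosses clusters is credited to a note as genuine helpfulness.

```python
# Toy bridging-based ranking sketch. Illustrative only; not the actual
# Community Notes algorithm, though it uses the same core idea: fit
# rating = mu + note_bias + rater_bias + note_polarity . rater_polarity,
# then rank notes by note_bias (helpfulness left over after explaining
# away viewpoint alignment).
import numpy as np

rng = np.random.default_rng(0)

def bridging_scores(ratings, n_notes, n_raters, dim=1, epochs=300, lr=0.05, reg=0.03):
    """ratings: list of (note_id, rater_id, value), value in {0, 1}."""
    mu = np.mean([v for _, _, v in ratings])
    note_bias = np.zeros(n_notes)       # "genuinely helpful" signal
    rater_bias = np.zeros(n_raters)     # rater generosity
    note_vec = rng.normal(0, 0.1, (n_notes, dim))    # note polarity
    rater_vec = rng.normal(0, 0.1, (n_raters, dim))  # rater polarity
    for _ in range(epochs):
        for n, r, v in ratings:
            pred = mu + note_bias[n] + rater_bias[r] + note_vec[n] @ rater_vec[r]
            err = v - pred
            # Regularize the polarity factors 5x harder than the intercepts,
            # so same-cluster agreement is absorbed by the factors while
            # cross-cluster agreement flows into note_bias.
            note_bias[n] += lr * (err - reg * note_bias[n])
            rater_bias[r] += lr * (err - reg * rater_bias[r])
            note_grad = err * rater_vec[r] - 5 * reg * note_vec[n]
            rater_grad = err * note_vec[n] - 5 * reg * rater_vec[r]
            note_vec[n] += lr * note_grad
            rater_vec[r] += lr * rater_grad
    return note_bias  # surface notes whose intercept clears a threshold

# Raters 0-1 and 2-3 form two clusters. Note 0 is upvoted by everyone
# (bridging); note 1 splits exactly along cluster lines (partisan).
ratings = [(0, 0, 1), (0, 1, 1), (0, 2, 1), (0, 3, 1),
           (1, 0, 1), (1, 1, 1), (1, 2, 0), (1, 3, 0)]
print(bridging_scores(ratings, n_notes=2, n_raters=4))
```

In the toy data, note 1's upvotes split along cluster lines, so the model attributes them to polarity alignment and note 0 ends up with the clearly higher intercept. That is the anti-brigading property Alex describes: a coordinated group voting up its own notes doesn't move the score that decides what gets surfaced.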
Evelyn Douek: Yeah. So it'll be interesting to see how this works out in the lead-up to the election. It'll also be interesting to see whether this is transitioning to be the primary tool that YouTube is going to be relying on to deal with election integrity issues, or whether it's going to be one tool amongst many, just an additional feature alongside the stuff that they did last election. That'll be something to watch. Of course, I won't hold my breath for a lot of transparency about how it's being rolled out or how effective it is, but for anyone listening that might have a say in this: data about effectiveness and what you're seeing would be super, super useful in this space. And so it's a small plea to make that available.
Alex Stamos: And Evelyn believes a really smart thing is to send one of your executives to testify in Congress.
Evelyn Douek: That's exactly right. Preferably the CEO any day now. Any day. This year's the year.
Alex Stamos: What comes first, the Supreme Court cases or that? Yeah, both?
Evelyn Douek: YouTube CEO to Congress 2024. It's going to be my year.
Alex Stamos: This is going to become this running joke until you retire. You're in your first year of teaching, and you're going to have an entire career as an academic-
Evelyn Douek: They'll never have called them. Yes, exactly. Well, we will see. Okay, another story squarely in our wheelhouse this last week was a big story from Reuters about a US military-run information operation. The narrative that was being spun was to discredit China's COVID vaccine, and it was in particular targeted at the Philippines. We've talked, including on last episode with Renée DiResta, about Pentagon operations before, but this was an exclusive and new exposé about this particular campaign. Don't worry, the campaign didn't target Americans at all, because that would be bad, and that's not something that the US military does, but it was targeting the nation of the Philippines, which had a particularly low inoculation rate, one of the lowest in Southeast Asia. And there's research that suggests that narratives sowing doubt in one vaccine increase vaccine hesitancy overall, so huge public health ramifications from this narrative, which the story says a lot of public health experts were shocked and dismayed to hear the military had been pushing. But yeah, curious, Alex, do you have a contrarian take on this one, or is it just bad all the way down?
Alex Stamos: It's super bad. The United States of America should not be doing this. So I understand the context, and the context here is something we wrote about. If you go to io.stanford.edu, you can read all about how China was utilizing disinformation during COVID for a variety of reasons: to move blame for COVID away from them, to blame the United States, to plant these ideas that COVID was a US bioweapon and stuff like that, and to attack American-made vaccines. We should absolutely not respond in kind. We are the good guys. I just have to repeat that. Somebody needs to go to the Pentagon and shake them and be like, "We are supposed to be the good guys." We are the democracy, and we absolutely should not be using disinformation tactics to spread any kind of anti-vax communications at all, but certainly not in this way, where you go to a desperately poor country and you convince them not to take a Chinese vaccine because you're pushing an American vaccine or because you're responding to Chinese disinformation.
Evelyn Douek: Yeah.
Alex Stamos: It's absolutely freaking terrible. It makes me so angry. Now, I think what we still don't know is: is any of this stuff going to happen in the future, because of our work before? So Facebook found a bunch of Pentagon disinformation, we analyzed it and did attribution, and the Washington Post wrote a great story, because we don't have the ability to just call people in the Pentagon and find out whether they actually paid for it. And so we did everything we could around attribution. But then the Washington Post was able to figure out very quickly that this was a government contractor that did it. And because of that, there was a review, by one of our Stanford colleagues who was the Under Secretary of Defense at the time, into the policies that the Pentagon is supposed to have.
And so I think we deserve an answer from the administration about what the policy is now in the Pentagon. Not five years ago, 10 years ago, when some of these programs were stood up; some of these programs were actually stood up during the Bush administration. They're like Afghanistan-, Iraq-era disinformation programs. And certainly this one was mostly during Trump. The Biden administration should say that the United States has come up with a policy that the Pentagon will not be doing this ever again, because our taxpayer dollars should not be used for this kind of disinformation. It hurts people and it hurts our standing in the world, and it makes it so much harder as a country for us to stand against the People's Republic of China, and especially Russia and Iran, when they go and they do these campaigns against our citizens or try to hurt American interests around the world.
Evelyn Douek: Yeah, so absolutely, I couldn't believe that the official response actually was exactly as you said: "They started it." A Pentagon spokeswoman said to Reuters that the US military uses a variety of platforms to counter malign influence operations, and then she also noted that China had started an information campaign to falsely blame the United States for the spread of COVID-19. So they started it. It's not our fault. Which, yeah, exactly. Get me a tissue. It's a bad look and not a good way to go about these things.
Alex Stamos: I'm sorry, you buckle up and you be the good guys. This is not how we won World War II, with "Oh, they started it." We went and we kicked butt within the context of what was appropriate for the United States and the rules the United States was going to play by. This drives me insane. Sorry to get angry, but the United States should not be doing this. It sets back everything that has been done globally to try to counter disinformation online, to counter government influence operations. And in the long run, a world in which every government is manipulating everybody else's citizenry with online lies is not a good one for democracies. That is a good future for autocrats. It is a good future for the Chinese Communist Party. It is not a good future for the United States of America.
Evelyn Douek: Yeah, couldn't agree more strongly. So the US government comes out looking really, really bad from this. But the other thing that I wanted to pick up from this story is that one of the things Reuters was reporting was that a bunch of social media executives were having meetings with the administration saying, "Look, we found these campaigns. They're not particularly well hidden. They're not particularly subtle. They're not doing a good job. We found them. Cut it out." So I mean, good for them for raising this issue. However, the question, and we were talking about this before we started taping, was: hold on, did they publicly report any of these campaigns that they found? Because we are now in this era where these platforms release periodic reports about information operations that they find on their platforms and remove, and I don't remember seeing this one pop up in any of those reports, which raises serious questions about the comprehensiveness and bias of some of those reports.
Alex Stamos: No, they have not shown up in those reports, nor have they shown up in the databases that this data is archived in. Now, we don't have the quarterly report that covers this one yet, and so it is quite possible we'll see it. And so I will withhold judgment, but I do hope we see this in Facebook's report. Twitter, of course, just threw away doing this, because being free speech means you let the Iranians and the Chinese get away with massively manipulating Twitter. But Meta is theoretically putting out these reports, and so it should absolutely include this information operation. I hope to see it in the next one.
Evelyn Douek: Okay. Well, we will see. I mean, some of these reports are many years old now and some of the meetings were many years ago. And so I think it is discouraging to say the least.
Alex Stamos: It's discouraging because we've also seen the same thing with India; we've seen Indian influence operations not make it into these reports. Now, in that situation, you might have the argument that you're trying to keep your Indian employees out of jail, but if Facebook writes this up, no American is going to go to jail. That is the great part about living in this country, in theory. Who knows in whatever next administration we have, but as of right now, it is totally safe for Facebook to write this up without any of the key people there going to jail. And so I do hope they write this up.
Evelyn Douek: For transparency and transparency reports to be meaningful, they have to also be comprehensive, not just limited spotlights onto what they want us to see. Okay. The other big headline in our space over the past couple of weeks, and I'm sure all of our listeners will have heard about this, is the Surgeon General's call to publish a warning label on social media platforms that advises parents that using the platforms might damage teens' mental health. This is something that builds on a number of advisories or comments that the Surgeon General has made about social media, and in particular an advisory that he released last year about...
Well, actually the advisory wasn't one of the most alarmist about social media. It didn't say, "Look, there's definitive proof about the evil of these platforms." It was pretty candid about the fact that there wasn't adequate evidence that social media was safe. But that is very different from the analogy that the Surgeon General is drawing in this op-ed that he published in the New York Times calling for warning labels: that social media is like cigarettes, and we have warning labels on cigarette packets, and so we should have equivalent warning labels on social media platforms. I have a lot of thoughts about this.
Alex Stamos: Are cigarettes speech, Evelyn?
Evelyn Douek: Yeah.
Alex Stamos: Is that how that works?
Evelyn Douek: Yeah, exactly. Speech of all kinds is exactly like tar that you breathe into your lungs. I mean, it's hard to overemphasize how antithetical this idea, that we should act in the absence of evidence because this might be bad, is to how we think about the First Amendment and free speech in general. So the Surgeon General starts with this statement that one of the most important lessons he learned in medical school was that in an emergency, you don't have the luxury to wait for perfect information. You assess the available facts, you use your best judgment, and you act quickly. And the history of American free speech law is: we did that for a while. It turned out to be really, really bad, because you locked up a bunch of people that were saying unpopular political things but probably really shouldn't have been locked up.
And then we realized, "Oh, that's not a good way to think about free speech." You can't lock people up just because speech might have a tendency, or might in some cases lead, to bad outcomes; we need to have a much higher standard than that. So as a free speech scholar, reading those words as an argument for having these warning labels on social media platforms, it said it all, to my mind, about the reasons why this isn't a compelling argument. But yes, I mean, that's coming from the free speech perspective. Curious, Alex, what you thought of this one?
Alex Stamos: Look, I am not an expert on the overall impact of social media on the psychology of children. Obviously, as a parent, it's a very difficult thing that all parents struggle with these days. But who is an expert is our colleague Jeff Hancock, who's an actual trained psychologist and was on a very august, large committee of psychologists, psychiatrists, and pediatricians that the National Academy of Sciences pulled together to look at this question. And they looked at all of the research, of which there's a significant amount, and they wrote a 275-page report. And basically, what does that 275-page report say? It says it's complicated. It says there are situations where social media is good for kids and there are situations in which it's bad, and I do recommend the report.
They also have a one-pager, which is great because it's got just short recommendations: if you're a parent, how can you take the science and make things better? How do you set rules? How do you look for problematic use? Which is great. But I do recommend anybody who listens to this podcast should read the National Academy report, or should at least read through it. Now, there's a whole section in there about actual adversarial harm to children. And that's something that I do have some expertise in, unfortunately, and in that area, it is terrible. But that is not about just having a phone, and putting a warning label saying social media is bad for your health is not going to help you with that. There are ways that you could train kids around sextortion, around cyberbullying, and such.
And so I think if the Surgeon General really cared about these real abuses, there's a lot of things in which the government could set the standard and push forward and, honestly, jawbone the platforms to do better education. Because with things like sextortion, if you educate kids about what that looks like and what the outcome is, and then how to get an adult to help you out, then that could slow down the speed at which these abusers are able to extort kids, but also hopefully mitigate some of the most terrible outcomes, of which there have been many. And so focusing on specific areas is something you could actually do. The overall warning, I think, will do nothing, and it is not backed by the science. And so the thing that really angers me here: we're coming out of COVID, in which one of the real problems we had is that the science was being created in real time.
And everybody was watching that on social media, and every single paper that came out, every single new thing, was being dissected by non-experts. And one of the things that really caused long-term problems for our society was that there were situations in which public health officials were not totally honest. Things like "don't buy a mask." And they didn't say "don't buy a mask" because masks didn't work. They said "don't buy a mask" because they wanted to save N95s for first responders. So instead of saying, "Please don't buy all the N95s, we need them right now for first responders," they said things like, "Ah, masks might not work or whatever." And then when you turn around and tell people, "Buy a mask," they don't believe you.
For the Surgeon General to go out there and to utilize pseudoscience like this, when it's super complicated, and to say it's simple, it is politically motivated lying, and it once again politicizes public health at a very terrible time to do it. I think that's what really angers me here, because it feels like the Surgeon General has not learned from what happened during COVID. One of the lessons of COVID, to me, is that institutions need to be as completely honest as possible. You cannot tell little white lies because you think it is in people's best interest. And that's what I think is happening here. I think the truth is that this is a very complicated topic. Instead of saying, "It's complicated, and this is the way we're going to address it," he wants to have a nice, simple, politically valent solution.
Evelyn Douek: Yeah. I mean, he would say, I think, that he's being candid about the fact that the science isn't settled. He is, more than some others, acknowledging that this is not settled or that it's complicated. But very much the vibe of the op-ed and of the interviews that he's done has been exactly as you said: tapping into that emotional reaction that you have around kids and saying, "Look, but we can't wait until it is settled. We can't wait that long. The moral test of a society is how it protects its kids." And using that kind of language is very clearly intended to galvanize a particular kind of reaction. And I agree, it's disappointing and discouraging to see. So first, it's bad policy. Second, it would likely not be effective, because a warning label that you have to click through that says, okay, extended use of this platform may cause harm, is not going to dissuade anyone, I don't think. Third, it's unlikely to happen, because the Surgeon General can't do this unilaterally.
It requires an act of Congress, and that... well, it is more likely, I guess, in this area than many others. There does seem to be a bipartisan coalition around kids' online safety that there isn't in pretty much any other area. So I guess this might be one of the rare areas where there might be action. But even if it were to get through, this is almost certainly unconstitutional. And that's because the First Amendment does not only prevent the government from shutting you up when it doesn't want you to speak, it also prevents the government from forcing you to speak when you don't want to. It's compelled speech; that's the technical term for it in the doctrine. And there's very real reasons why this is important. You might think that it's nowhere near as bad when the government forces you to speak as when it censors you, but there's very obvious ways that this can be politicized.
The government can force you to say things that you don't want to have anything to do with. And a good example comes out of a recent case from our favorite court, the Fifth Circuit, the court that you might remember from such hits as upholding the Texas social media law and the jawboning claim, both of which are on appeal at the Supreme Court and which we're on tenterhooks about any second now, and which also upheld Texas's age verification law for adult websites. For that court, this kind of compelled speech was a bridge too far. Even the Fifth Circuit recently struck down Texas's attempt to make adult sites display health warnings about pornography.
So in this age verification bill, Texas also would've required sites to have huge warnings when they first open up that say, "Texas Health and Human Services warning: pornography is potentially biologically addictive, is proven to harm human brain development, desensitizes brain reward circuits," et cetera, et cetera. That kind of language. And there are a bunch of different such warnings that Texas was going to compel these adult sites to show. And even though the court upheld the age verification portion of the law, it said that the state of scientific research about the effects of pornography was too up in the air to support the warnings in this case.
And you need something far more settled in order to satisfy even the most relaxed standard of scrutiny and justify this kind of extensive requirement that these providers clearly don't want to comply with. So if it's a bridge too far for the Fifth Circuit in the much more limited case of adult websites and displaying content to children, I can't think that you're going to find many courts sympathetic to the claim that, for social media websites generally, there's enough settled science to support this warning screen. So I guess that's the comforting thing: even if it does get passed, it's almost certainly unconstitutional.
Alex Stamos: Well, it's comforting, and it's another 300 pages you could write in a law review article one day.
Evelyn Douek: Right. Exactly.
Alex Stamos: What should be comforting for you is that you're never going to be out of work.
Evelyn Douek: Completely.
Alex Stamos: Your area of law is definitely not disappearing.
Evelyn Douek: Well, this is an excellent segue to the full employment program for lawyers and academics working in this space. The other law passed this week that is no doubt going to spawn a bunch of lawsuits is New York's Stop Addictive Feeds Exploitation (SAFE) for Kids Act, coming out of the New York legislature this week, which bans "addictive feeds," meaning essentially any feed that uses algorithms to recommend, select, or prioritize anything for users based on user information or information associated with the user or the user's device. So essentially it wants chronological feeds, and it requires platforms to either not have algorithmic feeds for minors or require parental consent for them.
And it also makes unlawful notifications to minors between the hours of 12:00 AM and 6:00 AM Eastern, which is interesting. Only Eastern time; the hegemony of Eastern Time continues. So there are a bunch of things in this law. First of all, I mean, the central issue here is that it does require age verification, because the way that a platform has to work out whether this user can be given an algorithmic feed or needs to be given a chronological feed is the use of commercially available age verification tools. And as we've talked about many times on this podcast, there is just no way to do that that is either privacy protecting or feasible or, it turns out, constitutional. So just another unconstitutional law coming out of another state this week.
Alex Stamos: Yeah, I just want to point out on the age verification thing that we keep on having states just jump over the hardest part of all this and say, "Well, we'll figure out age verification later." And then they jump to regulating porn or regulating algorithmic feeds or regulating privacy or data collection. You can't jump over that. That is a huge problem. Knowing that somebody is not a cat on the internet is a fundamental issue with the internet. And the solutions that actually work are mostly authoritarian ones. They're mostly: you have to show a government ID to get a phone, or to get an account, or to sign up, or you have to turn over a lot of data. And for anything that has any kind of privacy purpose, to argue that you're privacy-preserving while requiring somebody to show a government ID to get online does not make much sense.
And so I think there's a couple of things here. One, we're going to have to come up with some kind of, if not firm age verification, then age gating that is privacy protecting, if we're going to want any of these things to work. And so we have to start there. And that's not something you can fix on a single-state level; that's going to have to be a federal and international thing. Second, the algorithmic thing just demonstrates how much this legislation is driven by not-great journalism. For whatever reason, people think the two things that are hurting kids more than anything else are privacy issues, that their information is being collected to show them ads, and algorithmic feeds. And it's because those are the things that you read about in the New York Times, people complaining about them. It also happens that online advertising is what hurts the economic backbone of the journalists who are writing these stories.
So there might not be total coincidence there, but bad techlash journalism is leading to bad techlash legislation that is not based upon the actual risk. There are real adversarial things that are happening to kids right now in the state of New York that they are not prosecuting. There are crimes against children that they are not prosecuting. We just did a humongous report on all the failures of the child safety ecosystem, of which the state of New York has done absolutely nothing to address. And so I have no time for these grandstanding politicians who skip over the hard part, for which the solutions are, again, often authoritarian ones. They are the Chinese Communist Party solution, which is: you show ID to get online and to get a SIM card. And they jump to thinking that you can just magically make things better without addressing the actual things that are harming children online right now.
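To make Alex's "privacy-protecting age gating" point concrete, here is a toy sketch of the usual design split: an issuer who already knows your age (a DMV, a carrier) signs a bare "18+" claim, and a site verifies that claim without ever learning who you are, while the issuer never learns which site you visited. Every name here is hypothetical, and this is a deliberately simplified sketch using the third-party `cryptography` package; real proposals (Privacy Pass-style blind signatures or zero-knowledge proofs) go further so tokens can't be linked back to issuance.

```python
# Toy three-party age-attestation sketch. Illustrative assumption, not any
# real system: the issuer attests age without identity, the site checks the
# signature without contacting the issuer.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- Issuer side (already verified the user's age out of band) ---
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

def issue_age_token(over_18: bool) -> tuple[bytes, bytes]:
    """Sign a bare claim. The claim carries no identity, and the issuer
    never learns which site the token will be presented to."""
    claim = b"age:18+" if over_18 else b"age:<18"
    return claim, issuer_key.sign(claim)

# --- Site side: checks the claim against the issuer's public key only ---
def site_accepts(claim: bytes, sig: bytes) -> bool:
    try:
        issuer_pub.verify(sig, claim)  # raises on a bad signature
    except InvalidSignature:
        return False
    return claim == b"age:18+"

claim, sig = issue_age_token(over_18=True)
print(site_accepts(claim, sig))             # True: valid 18+ attestation
print(site_accepts(b"age:18+", b"x" * 64))  # False: forged signature rejected
```

Even this split is not enough on its own: a reused signature is linkable across sites if the issuer and sites compare notes, which is why the serious designs need blind signatures or zero-knowledge proofs, and why, as Alex says, this is a federal-and-international-standards problem rather than something a single state statute can assume into existence.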
Evelyn Douek: Yeah. I mean, it's pretty clear that the thing that they're thinking about here is overuse: kids spending way too much time on, basically, TikTok, and then not getting enough sleep, like the overnight notifications, the idea that it's interfering with their sleep. But it's just such a blunt solution to that problem, because the idea that chronological feeds are inherently better for kids than algorithmic feeds in the absolute is just not supported by the empirical reality. And indeed, if you want platforms, as politicians often do, to be elevating feel-good content and wholesome content and educational content, and to reduce spam and porn and divisive or aggressive content or hate speech, the kind of content that makes you feel bad when you're using the platform.
All of that, it turns out, requires algorithms and recommendations and information about the user and their device. That is the kind of thing that's going to be made unlawful by this act. So, blunt tool. The good news, again, is that when the Supreme Court issues its NetChoice decision, which literally could be tomorrow, the safer money is on the idea that this is probably going to be unconstitutional. So the idea that you can force a platform to not engage in this kind of editorial decision making in how they decide to present their content to users, which is what algorithmic feeds are, that's the central battleground of these NetChoice cases. And when or if the Supreme Court decides, well, when the Supreme Court decides that case, and if they say-
Alex Stamos: If, if. Who knows, Evelyn, maybe they will never release it.
Evelyn Douek: I'm getting increasingly pessimistic. That's right.
Alex Stamos: They're just going to end the term and not say anything about it, and keep everybody guessing. Yeah.
Evelyn Douek: These are the nightmares I've had. Welcome to my nightmares. Yes. So when they decide the cases, if they say platforms have this editorial discretion, this First Amendment-protected discretion to have their feeds as they want, then that will pretty much spell doom for this law. And indeed NetChoice, the trade association, has already strongly indicated that it thinks the law is unconstitutional. And so this will be yet another chapter, it seems likely, in the NetChoice restatement of the law. Governor Hochul, however, did tell CBS News in an interview that, "We've checked to make sure, and we believe it's constitutional." So I guess they've really done their work over there. So I guess they're not feeling worried, because they double-checked.
Alex Stamos: Oh, did they double check?
Evelyn Douek: Yeah, exactly.
Alex Stamos: They double checked.
Evelyn Douek: They just checked.
Alex Stamos: Might want a triple check next time.
Evelyn Douek: Yes.
Alex Stamos: They asked ChatGPT, "Is this constitutional?" Yeah. Well, hopefully it was an algorithm that told them it was constitutional.
Evelyn Douek: Exactly. Famously good legal advice from ChatGPT. This was the state that did pass the law titled "Hateful Conduct Prohibited" while insisting that it wasn't trying to outlaw hate speech. So it doesn't have a great track record on careful drafting for constitutional review.
Alex Stamos: What, you're asking about my T-shirt that says "I want to ban hate speech" while I say that?... Yeah.
Evelyn Douek: Precisely. That is almost the extent of the debate; it wasn't much more sophisticated than that. And that is our roundup of the news this week, as we wait for our world to be either turned upside down or to have longstanding principles reaffirmed by the Supreme Court any day now. I don't know, is there sports news, Alex?
Alex Stamos: There's no sports news. It is the summer, and so there's not college sports, but there is-
Evelyn Douek: I knew that for sure.
Alex Stamos: There is one victory in the Atlantic Coast Conference that I was interested in exploring, though. I was wondering who won the teaching award at Stanford Law School this year, the one for the teacher who strives to make teaching an art? Did you catch, were you [inaudible 00:39:12]?
Evelyn Douek: I thought you were going to ask me a sports question then. I was getting very worried.
Alex Stamos: No. Do you know the answer to this one?
Evelyn Douek: I was getting very stressed. I do know the answer to this one, and it was...
Alex Stamos: Because it was you. Evelyn won the John Bingham Hurlbut Award for Excellence in Teaching at Stanford Law School, in her first year teaching at Stanford. Wow.
Evelyn Douek: Thank you.
Alex Stamos: You should just quit teaching now. You should quit, you're done.
Evelyn Douek: That's right, it's all downhill from here.
Alex Stamos: Anyway, congratulations, Evelyn. Well deserved. And I always knew your students loved you, but now it is official, and there's a very official picture of you in your regalia looking very... You did not burn your Harvard gown out of anger.
Evelyn Douek: It turns out you can hire them from the Stanford bookstore. If, potentially, hypothetically, someone didn't hold onto their treasured robes, you can obtain a copy.
Alex Stamos: Oh, wait, so they have a library of different universities' robes?
Evelyn Douek: Exactly, yes.
Alex Stamos: That's awesome. Okay, well, congratulations, Evelyn.
Evelyn Douek: Thank you.
Alex Stamos: Well deserved. So if you're listening to this and you're going to be entering Stanford Law, now you know whose class you should take, because she is the winner of the teaching award for 2023-2024.
Evelyn Douek: Well, thank you for embarrassing me in that way, Alex. And not in the way of asking me a sports question that I didn't know the answer to.
Alex Stamos: I can do that too. [inaudible 00:40:28].
Evelyn Douek: This is slightly more preferable, but yes, I appreciate it. And in order to get out of this awkward situation, I'm going to read the credits. So this has been your Moderated Content weekly update. The show is available in all the usual places and show notes and transcripts are available at law.stanford.edu/moderatedcontent. Thanks to those of you that gave us a rating recently; we see you and appreciate it. And to the rest of you, what's going on? It's summer. You've got spare time on your hands, use it wisely. And this episode-
Alex Stamos: Oh, be careful who you ask for ratings. I'm a little... Not everybody should be rating this.
Evelyn Douek: Yeah, that's right. Somehow target that recommendation, Brian, to our loyal and positive listeners if you can.
Alex Stamos: Well, but not with an algorithm because there might be kids who listen to this.
Evelyn Douek: That's right. Because otherwise we could run into a problem in New York and we may be fined. Otherwise, this episode wouldn't be possible without the research and editorial assistance of John Perrino, our policy analyst extraordinaire at the Stanford Internet Observatory. It is produced by the wonderful Brian Pelletier. And special thanks to Justin Fu, who, this will be the last time you hear him in the credits. He's off to law school and will no longer be my assistant at Stanford Law School. So thank you, Justin, for all your hard work over the past couple of years. I really appreciate it, and wishing you the best of luck out there.
Alex Stamos: Thanks Justin.
Evelyn Douek: And catch you next week.