Moderated Content

MC Weekly Update 3/6: A "Comprehensive" Episode

Episode Summary

Alex and Evelyn discuss the snowballing TikTok bans (won't someone remember the First Amendment!); TikTok's new "screen time limit"; Twitter's "new" violent speech policy; Meta's response to the Oversight Board's recommendations on its cross-check program; the FTC's blog post on AI hype; post-Dobbs data requests and content demands on tech platforms; and Google's "civil rights audit."

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don't forget to subscribe and share the podcast with friends!

Episode Transcription

Speaker 1:

For markets with heightened election misinformation concerns, Google ensures that employees have language fluency. So thank you WilmerHale for that advice.

Speaker 2:

Right. That recommendation costs $10,000.

Speaker 1:

That's right.

Speaker 2:

Just for that whole...

Speaker 1:

I would do it for less if you... Actually, I wouldn't. Academic integrity, blah, blah, blah, blah, blah. But my God, in an alternate career, that is the job I want.

Hello and welcome to Moderated Content's Weekly News Update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos. We're going to jump right in this morning. So we're starting with the TikTok, TikTok, where the bans are accelerating around the world. Canada and the EU's diplomatic arm, the European External Action Service, have joined the European Commission, the European Council, at least five national European governments, including major cybersecurity powers Germany, the Netherlands, and Estonia, the Danish Ministry of Defense, and a bunch of US states in banning TikTok from official devices. And the White House has now set a 30-day deadline for government agencies to ensure that they don't have the Chinese-owned app on federal devices, implementing a deadline for the ban that was passed last year. And in further momentum, the House Foreign Affairs Committee has approved a bill that would give the president authority to ban TikTok nationwide and has moved the legislation to the House floor on a party-line vote.

Now, it's unclear what's going to happen to that and whether Biden would veto it or use those powers. So I don't want to spend too long on the particulars of that, but I think it's just sort of this general trend. We can see the narrative that's developing here, the snowball effect of these bans coming in all over the place. Now, we've talked about this a bunch, but given that it is still snowballing, I think it's worth just taking a moment to step back and recap our thoughts about this.

And so, Alex, there's two camps on this. One camp says this is a dire national security threat: TikTok did things like track the location data of journalists in order to see who was leaking from the inside, and it has the potential to manipulate the newsfeed or the For You feed to push Chinese propaganda. The other camp is deeply skeptical and says, "Look, this is no worse than any other tech company in its data practices. And if China or Russia or Iran wanted to get data on Americans, they could just go and buy it from data brokers. And so this is a complete sort of xenophobic overreaction to the threat that is putting a bandaid on a deep wound, and what we really need is a national privacy law." So what's your sort of view on this? Which camp is right?

Speaker 2:

Yeah, so I have a mixture of a bunch of these different camps. I do think there are certainly risks of Chinese companies being dominant in consumer tech services as well as enterprise, but we're talking about consumer tech services here. For me, most of the risks are related to data access. There are no good examples of kind of mass manipulation by TikTok. There have been individual censorship decisions, content moderation decisions, depending on who you ask, that are related to the political proclivities of the Chinese Communist Party. But most of those are in the past; we haven't seen any of those recently. But the amount of data they're collecting is pretty significant, and having code running on the devices of every young person in the United States does pose a risk. That being said, if you're worried about that, like we've said before, I think the number one Chinese app that we should be careful about is WeChat.

Because WeChat is used by the entire kind of Chinese-speaking diaspora. It's the only way you can talk to your family in China. And I know for a fact that WeChat is used to manipulate and watch and surveil Chinese citizens who are coming to school in the US or working in the US, and is used to reach out to them in situations where the government wants stuff. So WeChat is where I would start if you're going to start with a specific company. I think, overall, the statement that we just need a privacy law that covers these issues, and then if you violate it you can be in trouble, is the right move here. We still lack a federal privacy law. We have every state coming up with their own privacy laws. It's going to be a total mess if we have 50 different privacy laws in this country.

And one of the things that GDPR didn't do was lay out the difference between processing in authoritarian versus democratic states. And this is a situation where the US could do better than the Europeans, as well as set a good stage for cooperation with Europe and Australia, Japan, a variety of free nations who want to contain Chinese consumer apps. So I do think all of that's true. I also think, you know, you and I talked about this a while ago, just straight up saying you can't have TikTok in the App Store seems like not the way we do things in America.

You can talk about the actual constitutionality, but it's certainly un-American for us to say, "We're just going to force Apple and Google to drop TikTok." And so the functional question, how do you actually ban TikTok, I think is really important, because we do not want to copy the Chinese great firewall. We do not have a great firewall of the United States, and building one is not an appropriate thing. It's not something we should do as a free society. So whatever the outcome is here, it needs to be implemented in a way that's consistent with that. And a number of activists have now picked up on this, and you and I, I think, were a little ahead of the curve in saying that just banning the app, and especially putting pressure on Apple and Google to remove it from the app store, seems totally inappropriate.

Speaker 1:

Governments that have banned TikTok entirely include India, which is just a real leader in tech policy and freedom of expression in this space. So as a geopolitical move and a foreign policy move, that's the kind of message that you're sending, which is bad. And then as to the constitutional matter, my notes here on this just read, in all caps, FIRST AMENDMENT, FIRST AMENDMENT, FIRST AMENDMENT, because it is amazing to me how little the First Amendment gets mentioned in these conversations about banning a communications platform. The ramifications for freedom of expression are huge. Even if there is problematic content on the platform, the vast majority of this is legal speech, core protected speech, most of it not particularly interesting to me, but certainly not unprotected. And so it's just wild. The idea of a national ban is a totally disproportionate response to the threat. The First Amendment jealously guards against over-censorship and overbreadth, the idea being that we want to overprotect speech rather than underprotect it and err on that side of caution. And so it's just wild.

Speaker 2:

And fortunately there are some lawmakers who are thinking about this. So Mark Warner, Chairman of Senate Intel, a man who you can in no way call pro-Chinese, is specifically looking at the Berman Amendment, the law passed in the 1980s to protect foreign speech from overreaction during the Cold War. And he specifically said, "My goal is not to replicate the Chinese approach. That's the opposite of a market-based system." So fortunately, there's kind of crazy bills from the House, as I think we expect from the House at all times, but especially over the next couple of years. But the Senate is being a little more thoughtful, in a bipartisan way, about trying to limit the risk from TikTok in a way that is more constitutional. So I guess we'll see.

Speaker 1:

Right. And as crazy as the world is right now, and the courts, I just don't see this being upheld in court, even if it gets rammed through Congress in some sort of bizarre way. So we could just extend this process and spend many years arguing about it, or we could just recognize that this is not a proportionate or effective way of meeting the threat that we have here. Another way of dealing with the TikTok issue: TikTok has an answer to all of our concerns about this app. Now, Alex, I have a comprehension test for you. I'm going to describe a feature, and then I'm just going to ask you a yes or no question. So are you ready?

Speaker 2:

Oh, are you going to do the congressional thing, yes or no?

Speaker 1:

Yes, exactly. Yes or no, tell me. Yeah, except this is, you know, you are not going to be exposed to future legal liability as a result of your answer to this question.

Speaker 2:

For all the things this podcast is, it is not under oath yet. It is not.

Speaker 1:

Not yet. Yeah. I look forward to that day. So every account belonging to a user below age 18 will get a prompt when they hit the 60-minute mark of usage for a day that requires them to make an active decision about whether they want to keep using the app, and they have to enter a passcode saying, yes, I will keep using the app. You can also opt out of this feature if you want to, but you will get a weekly inbox notification with a recap of how much time you have spent on TikTok. Now here's the yes or no question, Alex, with that feature that I have described to you: Is this a screen time limit?

Speaker 2:

No, I would say no. If I had to, which is a possibility, if I had to write an expert witness report, if I was deposed as an expert witness, my answer would be no.

Speaker 1:

Yeah. Well, if you asked most reporters in the country this week, apparently they would've answered yes. TikTok got a bunch of headlines saying it had introduced a screen time limit for users under 18. And I mean, I just think, give that PR person a medal, whoever within TikTok came up with that. It's brilliant. It's fantastic. But there is just no reasonable way in which you can say that having to enter a passcode after an hour, and being able to disable that, is a limit on your screen time usage. So good work there.
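To make the point concrete, here is a minimal hypothetical sketch in Python of the flow as described above. The function and parameter names are ours, not TikTok's; the point it illustrates is that there is no branch in which watching is actually capped for a teen who types their own passcode or opts out.

```python
# Hypothetical sketch (not TikTok's actual code) of the prompt logic described
# above: after 60 minutes, minors get a prompt, but entering a passcode -- or
# opting out of the feature entirely -- lets them keep watching.

DAILY_PROMPT_MINUTES = 60  # threshold at which the prompt appears

def can_keep_watching(minutes_today: int, is_under_18: bool,
                      opted_out: bool, entered_passcode: bool) -> bool:
    """Return True if the session continues under the feature as described."""
    if not is_under_18 or opted_out:
        return True  # adults and opted-out teens never see the prompt
    if minutes_today < DAILY_PROMPT_MINUTES:
        return True  # under the threshold, no friction at all
    # Past the threshold, the only "limit" is a prompt the user can dismiss
    # by typing a passcode they chose themselves.
    return entered_passcode

# A 17-year-old at three hours of use who types their own passcode keeps watching:
print(can_keep_watching(minutes_today=180, is_under_18=True,
                        opted_out=False, entered_passcode=True))  # True
```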

Speaker 2:

Yeah, this does bring up a legitimate issue, which is that there's not a good standard between the operating systems, which are really the only place where you could truly enforce a limit like this, and apps. And I feel like there's a huge missed opportunity by industry for some self-regulation here, by coming up with ways that parents can set screen time limits at the operating system level that then flow through intelligently. You can do it on an Apple device, and it basically just totally turns off the app, and it's kind of dumb, and you can't have good exceptions and all that kind of stuff.

And I wish industry got together on, these are the issues we see with kids, and one, made it easier for parents on the setup of a device, because the screen time stuff is so incredibly complicated. The odds of a kid understanding how to bypass it versus 99% of parents are extremely high. But also, there's no real interaction between the screen time system and the apps themselves. So I do agree that this is obviously just a marketing push by TikTok, but it also does demonstrate kind of a lack of leadership from Apple and Google, that they have not gotten together and figured something out here.
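As a purely hypothetical illustration of the kind of OS-to-app standard being described here, which, as Alex notes, does not exist today, this sketch shows how an app could query a parent-set, OS-level time budget and wind down gracefully instead of the OS bluntly killing it. Every name and threshold is invented for illustration.

```python
# Purely hypothetical sketch of an OS-to-app screen-time interface; no such
# cross-platform standard exists today. The idea: the OS tracks a parent-set
# budget, and a cooperating app adjusts its behavior as the budget runs out.

from dataclasses import dataclass

@dataclass
class ScreenTimeBudget:
    daily_limit_min: int   # set by a parent at the OS level
    used_today_min: int    # tracked by the OS across all apps/devices

    @property
    def remaining_min(self) -> int:
        return max(0, self.daily_limit_min - self.used_today_min)

def app_behavior(budget: ScreenTimeBudget) -> str:
    """How a cooperating app might react to an OS-provided budget."""
    if budget.remaining_min == 0:
        return "blocked: show parent hand-off screen"
    if budget.remaining_min <= 10:
        return "wind down: disable autoplay, warn the teen"
    return "normal: full experience"

print(app_behavior(ScreenTimeBudget(daily_limit_min=60, used_today_min=55)))
# wind down: disable autoplay, warn the teen
```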

Speaker 1:

I will say, to be fair to TikTok, for people under 13, apparently it is the parent or guardian that will need to enter the passcode to enable an additional 30 minutes.

Speaker 2:

Yes. Because in my experience, every 12-year-old does not lie about their age when they sign up for... I mean, that is completely the experience of anybody who's worked in child safety, that 12-year-olds do not know that you can set your birthday as January 1st, 1969, heh heh heh, and thereby bypass every possible control that you have in place.

Speaker 1:

No, everything online is misinformation except the ages of users when they sign up for an app. That's...

Speaker 2:

Right. All of those 115 year olds you see on Instagram, they're really 115.

Speaker 1:

Looking good though. I want to know. I will definitely buy the supplements that they're selling. That's all I'm saying.

Speaker 2:

Yeah. Moisturize apparently. Yes.

Speaker 1:

Exactly. Okay. Over to our little Twitter corner for the week. Not a huge amount of news here. I mean, most of the news is non-news, which is that we still don't have massive changes to the API, or the open-sourcing of the newsfeed algorithm, which we were told would happen at some point this week.

Speaker 2:

Right. We were told that Elon Musk was stepping down as CEO.

Speaker 1:

That's right.

Speaker 2:

We should actually just... this segment should start with a list of all of the promises that have been made since the beginning that have yet to be fulfilled. Because I think there's probably about a dozen things that they've announced that they haven't done. Which, for anybody who owns a Tesla, feels very familiar. At least in this case, we don't have to pay $10,000 to Twitter for the things that they're promising.

Speaker 1:

Yeah, I mean, actually that highlights an interesting dynamic in the tech reporting space. It's very hard to report on non-events, so you do get a lot of press coverage of companies making massive announcements, and then it's hard to follow up six months later and say, "Well, this never happened." So there were tons of stories saying Elon Musk promises to step down as CEO, and I have seen basically no stories saying he still hasn't stepped down as CEO. So here we are, filling this massive vacuum in the national reporting environment. There's your public service. The news of the week was that Twitter does have a new violent speech policy that says they have a zero-tolerance policy towards violent speech in order to ensure the safety of our users and prevent the normalization of violent actions.

This new policy is a lot like the old policy; this is not a massive policy change. It includes banning things like coded language and dog whistles that are used to indirectly incite violence. And they say that they will take context into account, but will immediately and permanently suspend any account that violates this policy. So it is basically an old policy with a few tweaks. The question is, are those tweaks meaningful when you are enforcing a policy at scale? Similarly to how big announcements get news and non-events don't, big policy announcements get news, and follow-up about how the policy is actually enforced doesn't get a lot of news.

And that will be a big question. I mean, they have this policy, but do they have anyone around to enforce it? Are they going to be training people on the policy? So I don't really have a lot to say, except that this is exactly what I predicted, which is that this certainly goes a lot beyond what the First Amendment standard is for inciting speech, and Elon Musk has discovered that you need content moderation rules in order to run a social media platform.

Speaker 2:

Yeah, this is weird, because I feel like this policy came through a warp from an alternate universe in which Twitter is still a public company and is operating as normal, because it's just kind of a normal update to a policy that tweaks some things. Like you said, it's mostly the same; they've tweaked a couple of things. This happens all the time with companies: you need to reinterpret your policy based upon what's going on. I think this demonstrates the big gap between Twitter's policies and their enforcement.

Because I personally have had things directed at me this week that violated this policy. I haven't done a thorough academic study, but that's just based upon my minimal Twitter usage and the fact that I keep on getting mentioned in conspiratorial things, and then people send me things that are clearly in violation of this policy. So I think what's going to happen is Twitter's going to continue to have what seem like pretty reasonable policies, but then very selective enforcement. Certainly, if you violate this policy against Musk himself, you will be banned immediately. And so we're entering a world in which the policies on paper really mean nothing, and it's all going to come down to interpretation and what selective enforcement gets chosen on the back end.

Speaker 1:

Right. Shame we will never understand or know anything about that, because as we talked about last week, Twitter has suspended all transparency reporting. So whether this policy is being enforced five times or a hundred times or a million times, we will have no idea unless they hire a bunch of people to start producing those reports again. So, excellent. On to our segment on a platform that Twitter has managed to make look good, where Musk continues to be Zuckerberg's best friend: Meta has responded this week to Oversight Board recommendations on its cross-check policy. The board had given Meta a policy advisory opinion on December 6th last year that had 32 recommendations. The cross-check program, we've talked about it a little bit before, is what is sometimes called Meta's whitelist program. It's a list of high-profile or otherwise somewhat important accounts that, when they trip an alert for having violated a policy, get sent for an extra review before that content is taken down.
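As a rough sketch of that routing decision, assuming a deliberately simplified model: the list entries, queue names, and return strings below are illustrative, not Meta's actual system.

```python
# Illustrative sketch of a cross-check-style routing decision, as described
# above. The account list and labels are invented for illustration; this is
# not Meta's actual implementation.

CROSS_CHECK_LIST = {"head_of_state_account", "major_news_outlet"}  # hypothetical entries

def route_enforcement(account_id: str, classifier_says_violating: bool) -> str:
    """Decide what happens when an automated system flags a post."""
    if not classifier_says_violating:
        return "no action"
    if account_id in CROSS_CHECK_LIST:
        # High-reach accounts: a false positive is costly, so hold the decision
        # for additional human review instead of removing immediately.
        return "queue for secondary human review (content stays up meanwhile)"
    # Everyone else gets the ordinary at-scale enforcement path.
    return "standard enforcement (remove or label per policy)"

print(route_enforcement("head_of_state_account", True))
print(route_enforcement("random_user_123", True))
```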

It's a system in place to prevent high-impact false positives, the idea being that lots of people are going to see this content, or it's a public figure or a news outlet, and enforcing against them incorrectly is more costly than enforcing against the average user. Meta congratulated itself on fully implementing 11 recommendations, partially implementing 15, and still assessing the feasibility of one. The board tweeted that this was a landmark moment, so they both sort of patted themselves on the back for a job well done here. I think that there is a lot of good stuff in the response to the recommendations, and some things that will result in the improvement of the program, but I don't think this is the revolution that it's being presented as, and both the board and Meta have an interest in overstating how much change there is as a result of this back and forth.

So we should keep that in mind when evaluating the response. For example, there are a number of recommendations that Meta lists as implementing in part, where they say things like, "We agree with the goal of this recommendation and believe we can achieve its aim through other means." So: we're not actually going to implement the recommendation, we agree with the vibe, and we're going to implement the vibe of the recommendation. And they also say they're implementing recommendations fully when they're just going to continue doing things that they already do. They rejected a number of recommendations on the basis that they don't want to disclose who is on the list, for safety reasons and to prevent gaming by bad actors, which I think is a fair response. There's a lot of stuff that's in the long-term goal bucket, but there is a lot of agreement on things like extra transparency reports about the operation of this program and speeding up the review.

Because one of the criticisms that the board had made was that things sit there on the platform for a very long time, five days, sometimes much, much longer, in certain parts of the world that don't have well-resourced content moderation review teams. And that means that the harm from that content, if it is in fact violating, is magnified. Now, there's one thing that I wanted to pull out to talk about with you, Alex, because we have sort of talked about this before. The board had recommended that Meta should take the review pathway and decision-making structure for content with human rights or public interest implications and make sure that it's devoid of business considerations. And the way that it said Meta should do this was to separate the public policy or government relations teams from those charged with enforcing the policy. Now, because this is audio, you can't see that Alex just sort of punched the air, because this is a recommendation that you've been making for a very long time. It is also...

Speaker 2:

Right, including when I was at Facebook, internally. This is a key, key problem with the company: government relations reports up to the same person as the people who make product policy.

Speaker 1:

So here's what Meta said. Meta is implementing this in part, and here's its commitment. Yeah, don't punch the air too soon. We will continue to ensure that our content moderation decisions are made as consistently and accurately as possible, without bias or external pressure. We will continue to do what we are doing. While we acknowledge that business considerations will always be inherent to the overall thrust of our activities, we will continue to refine guardrails and processes to prevent bias and error in all of our review pathways. So I don't read this as "we're going to change anything major." I don't think the reporting structure is going to change. So why don't you talk a little bit about why you've made this recommendation, why you think it's important, and why it is potentially disappointing that Meta has not implemented it.

Speaker 2:

So on cross-check itself, I think they ended up in a reasonable place. I feel like a lot of the explosion over cross-check was from people who weren't really thinking it through: you cannot have a situation at any platform where all 50,000 content moderators can turn off Joe Biden's account. You just can't run... As the security guy, you just can't allow that to happen. So you are always going to have to have a mechanism where some people are super special and only really high-end people can act on their accounts. The problem there, as the Oversight Board properly pointed out, is you shouldn't change the rules for those people. It might take longer to enforce, because the queues are being handled by full-time employees and by people who you trust more to have that power. But we've seen from the Twitter world, this happened over and over again, where individual Twitter content moderators made crazy decisions, or their accounts were taken over, and that ended up in a bunch of security breaches.

On the other one, yes, it's extremely disappointing that they're not taking this opportunity. The core problem, I've always said, on the international side of Facebook, is that the content moderation people and the government affairs people report up to the same people. They are part of the same org. They have the same meetings. And so for any decision about content moderation outside of the United States where you need to turn to local experts, those experts are inevitably tied to the current ruling government. So one country that's had some very interesting issues recently is Israel. We're not going to get into the Israeli-Palestinian conflict, but there are a lot of interesting content moderation things going on in Israel. Israel has a super right-wing government now; a lot of the government itself is to the right of Netanyahu, which is kind of amazing, and is pulling him right.

And that's a great example: there's no Palestinian Authority government affairs office. There's the person who runs Facebook Israel, who used to work for Benjamin Netanyahu. She is very politically close to him, and now she works internally. And so whenever you have, internally, an issue that has to do with what is terrorism, or how you should handle content moderation around struggles between settlers and Palestinians, for example, the person you ask happens to be a person whose job is to keep the Israeli government happy with Facebook.

And so that is inevitable... It is just impossible for any person to have both of those kinds of goals and not have a conflict between them. And so the only reasonable way to fix this would be to separate out the product policy, the content moderation policy, from the government affairs team, and then have rules of, you can take input from the government affairs team, but you have to be very careful. And we're not even close to that. Other companies have done that. Twitter before Musk did that; maybe Twitter still has that, in that they just fired everybody, and so I guess you could still say they have it.

Speaker 1:

They don't have people interfering with the decision because the decision's not being made. Right?

Speaker 2:

Right. But other companies have done that, and I think that this will continue to be a real problem. You just can't expect that of the people you hire. And there are always more subtle issues. I've talked about this publicly before: for example, if you hire a college-educated person who speaks English in Turkey, they're going to have a specific set of political beliefs that often are not great for Kurds, for example. And so even if you have that kind of official separation, you'll end up, just based upon the kinds of people you can hire as an American tech company, with people who can get visas to live in the countries where you have these centers, such as in Dublin. So people in Turkey who can get visas to live in Dublin have a certain kind of background and certain political affiliations. Those kinds of things end up twisting all this stuff no matter what. But certainly, having a model where people whose explicit job is, I'm the government affairs person, I'm here to keep Country X from regulating Facebook, are involved in these content moderation decisions is inherently dangerous.

Speaker 1:

Right. And the response there, I guess, for Meta would be, yes, but they bring local context. It's like, yes, but what exactly is that context that they're bringing? I think overall, though, this was actually a very beneficial process. I don't think it was revolutionary, but I think it did bring a lot of openness. I learned a lot about the cross-check system, which was previously extremely opaque. I think there are positive externalities for the content moderation debates more generally, because we talk about these competing equities. We talk about why you need what was previously reported as a "whitelist system," but is really a cross-check system, a double-check review system, and understanding why that's important in the context of a huge platform. I also really appreciated the candor of Meta saying, "Look, we're a business, and economic considerations are going to be an important factor in what we do. And so there are people on this list that are there for business reasons and not there for political or social or other reasons."

And it's like, yes, we all know that, and I think we should be much more candid about the fact that these aren't benevolent public squares operating for some vision of a better world. They are businesses that earn money, and some people are on that list for those reasons. I think that's been productive too. On the other hand, does anyone actually read these announcements apart from me and the parents of board members? I don't know. I don't know how much this is actually doing in the world. I think it's interesting.

Speaker 2:

Well, I don't think that's a complaint we can make, because you and I talk about transparency all the time. But the truth is, transparency is never going to be something where normal folks open up their newspaper and see a huge article about a transparency report. That is always for the wonks and the academics and the regulators, to be able to keep an eye on these companies and then interpret it for the rest of the world. So that's what this podcast is for. If you want transparency interpreted for you, you can get it from real professionals.

Speaker 1:

Yeah, there's your ad read for the week, folks. Tune in to Moderated Content for real transparency. One other thing that is actually really interesting about the board is how much these kinds of things do get written up in newspapers. If you'd opened The Wall Street Journal this week, you would've seen a big story about these Oversight Board responses. So to the extent that this was intended to be a PR campaign for Meta, I think there has been quite a bit of success on that front, actually. The board releases decisions and they get written up in places like The New York Times, The Washington Post, Reuters, and so I think that's interesting.

But there is that dynamic that we were talking about at the start of the episode. Hey, we have a nice theme running through this episode, though, which is that a lot of the devil is going to be in the implementation and the details. So we have these big reports at the moment of decision. Let's see the transparency report that Meta releases in three or six months' time on the cross-check system, and whether that actually produces any meaningful information. We're not going to get a Wall Street Journal article about that.

Speaker 2:

Yeah. But this is also, I think, one of those situations where Elon Musk is, again, Mark Zuckerberg's best friend, in that now the media understands what it's like when the CEO of the company makes massive decisions based upon a thread with Catturd2. Or you could have 50 law professors argue about it for six months and then write these 100-page reports, right? I mean, apparently those are the only two ways we can make content moderation decisions.

Speaker 1:

So yeah, I hope that Musk is getting some nice gift baskets from Zuckerberg and the Oversight Board for all of the image and PR work he's doing for them. Okay. Speaking of PR, the FTC published a blog post this week warning industry about making false claims as part of the AI hype around chatbots and other products. Points for engaging writing: the blog post starts with, creatures formed of clay, a puppet becomes a boy, a monster rises in a lab, a computer takes over a spaceship, and all manner of robots serve or control us. For generations we've told ourselves stories, blah, blah, blah. Is it any wonder that we can be primed to accept what marketers say about AI?

So someone had fun; some intern at the FTC had a good time writing this blog post. The upshot is the FTC is saying, "Look, false advertising is the bread and butter of what we do. And if you are making hyped-up claims about what your AI products can do, beware that we have laws against that." I'm just curious for your response here: how much of this AI hype do you think there is? And how much of this is just FTC hype, saying we are watching, while everyone's talking about AI? Any thoughts on this one?

Speaker 2:

Okay. So I mean, my overall impression of what's going on right now is that Silicon Valley is hurting: stock prices are way down, it is a very competitive environment, they've had layoffs and such. People have been negative on tech for several years. There hasn't been a big breakthrough that has really changed people's lives probably since the cell phone, right? Since the iPhone we've not had a true technological revolution. We're kind of at the end of the cycle for cellular phones, and then people have watches and stuff, but none of those things have really had a big impact. And so now there's a race for AI to be the next revolution. Silicon Valley has these cycles where it is a hype cycle, but it is also a technological cycle, where people focus on something and then you spend years and years kind of getting the benefit of whatever technological change you have.

But because we're at the start of that cycle, traditionally what you have is big companies being disrupted by little companies. In this case, these big companies with all this money are trying not to get disrupted. Microsoft made this incredibly smart move by investing in OpenAI, which is a good protection. Google and Facebook have internal AI labs. What has held them back has often been the risks and downsides of AI, learning from the last cycle of the risks and downsides of social media on individual phones. They realized that they want it, and Microsoft broke the dam here by saying, "Screw it. We're just going to ship it anyway." Microsoft made a very aggressive move to ship OpenAI integration with [inaudible 00:28:41], even though it's not ready. Frankly, from a trust and safety perspective, AI is not ready.

So there is a legitimate problem here. The other legitimate problem is the FTC keeps on thinking that they can just invent the law without Congress acting. The FTC did this around cyber, where they used their authorizations from, what, the 1920s or 1930s around false claims to effectively try to create cyber regulation. The FTC does a terrible job of regulating cybersecurity in this country, I just want to say. They go and they pick individual companies, they destroy those companies or make life a huge pain in the butt for three companies a month, and then the other 10,000 breaches don't get addressed.

And there's nothing you can look at. You can't go to the FTC and say, "Hey, what are the security standards I should live up to?" There's no kind of documentation. So of all the agencies to regulate AI in this country, I think the FTC is one of the worst. They don't have any knowledge here. They don't have any skill, and certainly they do not really have the legal authorization, from [inaudible 00:29:35] signed by FDR, on what the rules should be around AI. So I think there's a legitimate set of risks here, but if you're policing it just on false claims, the risk is not false claims in advertising. The risk is the actual implementation of these products and what kind of safety issues they have.

Speaker 1:

Yeah, I mean, just a little bit in defense of the FTC: well, they're acting because no one else is. And it would be great also if they were better resourced. I mean, either Congress acts and creates another agency or gives the FTC more explicit authority, but also, we have this whole new world of risks and issues and problems, and not a whole new swath of federal resources dedicated to actually doing this job properly. So, yeah.

Speaker 2:

Right. But this is another demonstration. A number of people have talked about a digital agency, or, if we had a real privacy law, we'd end up with a data protection authority. That's not the FTC; that's not about false claims, it's about actual substantive rules. So it sucks that Congress doesn't do anything, right? It sucks that we have total political gridlock. But in a democracy, we're supposed to have democratically created rules, not rules created through lawsuits by the FTC and big companies settling with them and then creating precedent. That's what they've done on cyber, and I'm just going to say, as a cyber person, the FTC has been a disaster.

It's been a total wreck, right? So, I'm sorry, I don't see any upside here in the FTC getting involved. I mean, soon it'll be the SEC, right? If you're a public company doing AI, you have to file something to EDGAR about your AI. It's just the dumbest way we could possibly do this regulation, these agencies using their New Deal authorities to try to regulate, when the actual substantive issue here is one we don't have any rules on, and we really should.

Speaker 1:

So, in another area of democratically, or not so democratically, enacted rules: I actually don't know if we've talked about Dobbs yet on this podcast. It doesn't need a lot of background, the Supreme Court decision last year erasing the constitutional right to abortion, but everything is a content moderation issue, and so it's somewhat surprising that we haven't necessarily talked about this. It was only a matter of time before it would come up, and only a matter of time before the rubber hit the road in terms of the way that this is playing out online. So there are two aspects of the story that we want to talk about this week. One was an article doing the rounds online this week about tech companies complying with law enforcement requests for data about their users, aiding in prosecutions of illegal abortions. And Alex, I know you had some thoughts about these.

Speaker 2:

Yes. So I read a couple of articles. There's effectively not much new in this new article in Business Insider compared to a couple of others that came out right after the Dobbs decision. And the article does that thing where it is factually correct while giving people the completely wrong understanding. People are reading this article and they're not understanding. The lead here, the first paragraph, I'll just read it: As abortion bans across the nation are implemented and enforced, law enforcement is turning to social media platforms to build cases to prosecute women seeking abortions or abortion-inducing medication. And online platforms like Google and Facebook are helping.

So the entire thrust of this article is that companies are intentionally going in and helping to enforce abortion bans. That is completely not true. Yes, they are helping, because they are legally required to reply to search warrants. The only good thing about this article is they talk to Eric Goldman, who basically says that, but that's at the end of the article, and effectively nobody reads that far, because if you look at any of the online conversation, it's totally wrong. So what is the problem here? One is, the Supreme Court took away a fundamental right, the right to reproductive privacy, and allowed states to make abortion illegal. I'm against that. I don't know your position. Are you pro-Dobbs? Do you think Dobbs is a [inaudible 00:33:18]?

Speaker 1:

Oh, yeah, this is, let's get the law professor on record. The junior law professor, pre-tenure. Yes. Very. No, I'm with you on this one. Yes.

Speaker 2:

Right. Okay. So in our country, states have the ability to pass laws, and traditionally, for a long time, we effectively assumed that for the state laws that make things crimes, effectively all states, while there are slight disagreements, agree that those [inaudible 00:33:45] should be crimes, right? And so things like search warrants, arrest warrants, all that kind of stuff are pro forma inside of the United States. Between states, you know, you can have extradition hearings, but they're not like the extradition hearings when somebody gets extradited from Italy to the United States or from Italy to China; they're basically pro forma hearings of, "Yes, you have an arrest warrant in Arizona. We're sending you to Arizona. Good luck." That is all changing because of Dobbs, because now we have this huge split between states. And one of the powers that is granted to states is that their judges can sign search warrants.

Now, there are rules around this. At the federal level, there's [inaudible 00:34:18], the SCA; there's also a bunch of Supreme Court cases and Ninth Circuit cases interpreting both the Constitution and the federal law that restrict states. But traditionally, state search warrants have to be signed by a judge if they want to get to content; subpoenas can get basic subscriber information, but content requires a warrant signed by a judge. And from the state's warrant, you don't know what the crime is. So what's actually happening here is you've got prosecutors. One of the big examples is in Nebraska, a case where the prosecutor was able to get chat logs, between a woman and I think her friends and her mother and such, about an abortion, that she sent on Facebook Messenger. In Nebraska, the prosecutor accuses a woman of a crime and goes to a judge.

A judge signs a search warrant. When the search warrant gets to the company, it does not have to say what the crime is. It doesn't have to have an affidavit. Usually with federal search warrants you have that, so you'll have an affidavit attached to the search warrant that says, this is what we're accusing this person of. That doesn't have to exist here; it's not legally required under state or federal law. And so a search warrant just comes in and a judge says, "Give us the chats for this person in this period of time," and that is a legally enforceable court order from a judge. And so the companies don't really know what to do if they get that, because there are lots of legitimate crimes actually being committed in Nebraska. And so every once in a while, if prosecutors slip in an abortion-related case, there's no mechanism for the companies to know that.

So the idea that they're just intentionally going and turning over this data is not true. The real problem is that we've given all this power to prosecutors to prosecute women for a crime they should not be able to, and they have this ability to enforce it interstate. So there are some possible solutions here; none of them were covered in the article. Companies could get more aggressive. One of the things that companies used to do a lot, and that's become really hard, was to inform people before they answer a search warrant. There's, I don't know if you want to talk about this, but there's been this history of cases, post-Snowden especially, where the tech companies tried to go stop warrants. And basically the courts said, "You don't have standing to do that. You do not have the ability to argue this warrant's overbroad or whatever; only the target does."

And so the companies said, "Okay, great. Well, we're going to tell the target of your search warrant, and then they can challenge it. They can get the ACLU, they can get their own counsel, and then try to challenge your search warrant." And then the government's like, "No, you can't tell them." Right? And so a huge number of search warrants, by default, have a gag order attached, where the court is saying, "You are not allowed to tell this person we're issuing the search warrant." Now, in this case, it seems that this woman knew she was being investigated, so that's probably not the situation here, but there are going to be a bunch of other situations in which women are not going to know their data is being grabbed, because a gag order has been placed on the company to not inform them.

And so one thing companies could do is get back into the mode of fighting those gag orders more aggressively. There was a huge battle over this and a bunch of cases that the companies ended up losing, actually. But they could try to reopen that and get it to the Supreme Court. Unfortunately, I think we all know what the Supreme Court would do if it was an abortion-related case especially. The second thing they could do is try to put requirements on states to have more data in their search warrants. But in the end, if a judge says, "This is a search warrant," you have to comply. What happens if you don't comply with a judge's order, Evelyn?

Speaker 1:

Good things. Excellent, good things.

Speaker 2:

Excellent, good things. They send you a little chocolate box. Is that what they do?

Speaker 1:

That's it. One of the gift baskets we were talking about earlier. Yeah.

Speaker 2:

They order you to be arrested, right? Eventually. And they can try to disbar your lawyers and all kinds of stuff. And so now what you could try to do is create a shield: California, for example, could say, "We are not going to extradite anybody based upon anything related to an abortion-related crime." But I'm going to guess the US Constitution has something about this. Is that true, Evelyn? That there's stuff in the Constitution about states giving something about credit cards or something?

Speaker 1:

Yeah, that's right. Full credit and faith. Yeah, the full faith and credit clause. Yeah. This is great Socratic method you're doing here.

Speaker 2:

Okay, great. Yeah, perfect. Right. And so I think a lot of people, I've seen a number of civil liberties lawyers, have talked about what kind of law you could pass in California to try to prevent companies from complying, or to protect them so they can make the decision to do this. And the answer in the long run is, probably, it gets really dangerous, because if you say that California is not going to accept arrest warrants from Nebraska, just think about how red states will turn around and do that as well. You commit a gun crime in California, but if you get to Texas, you're fine. Then you end up in a really crazy place. And so realistically, in this situation, when you're talking about message content, the only kind of real solution for the companies here is to roll out more end-to-end encryption, right? Which is on its way for Messenger. I mean, Messenger already has end-to-end encryption if you click a button.

If that woman had clicked the button, and it's not easy, and said, I want my messages to be encrypted, this would not have been an issue. Although, of course, law enforcement in situations like this always has access to the devices themselves, and there are various rules about whether they can force you to unlock them and stuff, so it might not have protected her. But really, the only thing that companies can do here is keep on rolling out end-to-end encryption. And the other thing this article didn't talk about, which I think is probably actually the riskiest kind of warrant in this situation, is not directly going for content, because you have to know this person got an abortion to target them for their content. It's the location data. We've had this history now of law enforcement, both at the federal and the local level, doing these warrants where it's like, "Give me every phone that was within this distance of this place."

And so if they started doing that, just every day sending a warrant to Google, and really the real targets are the phone companies, because the phone companies keep location data for a very long time, and their location data is actually quite good. If they went to the phone companies and said, "Tell me every person who visited this place in this time window," and that happens to have an abortion clinic in the middle of it, then that is going to be really kind of dragnet surveillance. And the article didn't cover any of that. So I think it's a legitimate problem. It's just a bad article, in that the need to make it "tech companies are bad," which has to be the theme of effectively everything the media writes these days, [inaudible 00:40:19] the issue, and it's not helpful, because it does not allow you to have the real conversation about the real possible ways to address this issue.
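To make the "every phone near this place in this window" request concrete, here is a rough hypothetical sketch of the kind of query such a dragnet warrant implies. The log schema, coordinates, and thresholds are invented for illustration; real carrier and geofence-warrant systems are far more involved.

```python
# Hypothetical sketch of a radius-and-time-window query over location logs,
# illustrating the dragnet request described above. Not any carrier's system.

from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationPing:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def devices_near(logs, lat, lon, radius_m, start, end):
    """Return every device ID seen within radius_m of (lat, lon) in the window."""
    return {
        p.device_id
        for p in logs
        if start <= p.timestamp <= end and haversine_m(p.lat, p.lon, lat, lon) <= radius_m
    }

# Example: one ping inside the window and radius is returned.
logs = [LocationPing("device-A", 40.7128, -74.0060, datetime(2023, 3, 1, 10, 15))]
print(devices_near(logs, 40.7128, -74.0060, radius_m=150,
                   start=datetime(2023, 3, 1, 10, 0), end=datetime(2023, 3, 1, 11, 0)))
# {'device-A'}
```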

Speaker 1:

Yeah. Well, that was extremely comprehensive.

Speaker 2:

Sorry.

Speaker 1:

No, no. It was comprehensive; comprehensive is a euphemism. No, no, I think it was great. The only thing I want to do is pull out the example that you gave about gun crimes, because I think that's a really useful thing to do. This is so ideologically and politically charged that people, when they're thinking through this issue, should test their own priors about what a company should do in this situation if we're talking about prosecuting crimes under state laws. In one case it might be abortion; in another case, it might be gun crimes, illegal gun sales, those kinds of things. And if you have strong priors about what tech companies should do in response to lawful requests for information, you need to test your intuitions and make sure that they're consistent across different political areas and different issues.

And then again, we're talking about all of the different equities, because we have talked about the equities when it comes to encryption. We've talked about how this comes up a lot in terms of child sexual abuse material and things like that, where we're weighing the equities between privacy and safety as well. And we've talked about where we are on that in general, but this is a good case that highlights how that plays out. We are also seeing [inaudible 00:41:37], I mean, again, this was only a matter of time. So we have the data requests. One thing, just a note on the data requests as well: as we talked about last week, Elon Musk has stopped publishing transparency reports about how often Twitter is pushing back against government requests for data. So while this story was a lot about Facebook and Facebook data, it would be interesting to see stories about Twitter, especially in the Musk era.

Speaker 2:

Yeah, I mean, I think for abortion it's less likely, just because Twitter's used a lot less for private communication, and they have a lot less data. I think the most relevant companies are Apple for iCloud backups, because iCloud backups will include a bunch of interesting things; Google and Facebook for message contents; and Google for search results. So if somebody is logged in while they search Google, Google has a log of that. And there is a history of law enforcement going... We've had these cases where it's like somebody is accused of murder, and then they happen to have Googled how to get away with murder, or they dropped a pin at the place that the body was found. And that's pretty indicative.

Speaker 1:

Do not go near this place. Yeah, exactly. Yeah, just a reminder, right?

Speaker 2:

Right. You Googled how to bury a body. So if you had women searching for abortion clinics, or how to do a home abortion, or how to order a chemical abortion in the mail, you could see those kinds of requests going to Google. And then Facebook for the private messages, probably not so much the public ones. But again, I think the biggest target here, and we just don't hear about it because it's just not something that reporters write about, is the phone companies. Because the four phone companies basically know where every American is at every moment, and they keep those logs for months or years. And so I think we don't talk enough about location data and the incredible power of location data. This was brought up a little bit in the Alex Murdaugh case, this big murder case of a lawyer, where location data and the logs from his phone and stuff turned out to be a big part of it. So I think it's starting to enter the public consciousness a little bit more, but the location stuff is pretty incredibly powerful.

Speaker 1:

Meanwhile, of course, this is explicitly a content moderation issue. Texas Republicans have introduced a bill in the past week that would require ISPs to block a range of abortion websites that are listed in the bill, and that would also expose platforms to a private right of action for hosting or allowing people or content that aids and abets elective abortions. So this bill hasn't passed yet, so it's not worth going into all of the details, except to say that this was inevitable. And it's also completely at odds with something else that Texas is doing, which is passing a bill, or it has passed a bill and is trying to enforce a law, that would require companies to stop doing content moderation. Although it's not technically at odds, because I guess some of the content that it's trying to prevent here would be illegal under state laws, as we were talking about.

But the ethos is at odds. And also, not all of it would be illegal. And this is why I was screaming First Amendment when we were talking about Taamneh last week and the week before, because that was a case about aiding and abetting terrorism. And it is a First Amendment issue how much liability we expose both speakers and platforms to for hosting certain kinds of information, and how broadly you can say that hosting constitutes aiding and abetting a crime, whether it be terrorism, whether it be drug sales, whether it be procuring an abortion. And so it's crazy to me that the First Amendment wasn't mentioned in Taamneh. And one of the main reasons why was because, yes, it was a case about terrorism, but it was never just a case about terrorism. And what happens in Taamneh will also affect the constitutionality and considerations of this bill if it ever passes and comes up, or a bill like this, which ultimately will definitely pass in some state at some time and come forward to be challenged.

Can you expose platforms to liability for aiding and abetting crimes based purely on the fact that they hosted content, even if they don't have specific notice of the content in question? So, massive First Amendment problem. Like I said, it hasn't passed, so we'll wait and see, but it's something to watch. And exposing ISPs to liability for hosting certain websites is a massive escalation, and again, an extremely untailored response, because obviously blocking an entire website removes a whole bunch of legal speech as well. And just one quick thing. This had slipped by; I hadn't really seen this.

Google announced a little while back that it would do a civil rights audit, and then the company uploaded the audit without fanfare in an update to the bottom of the human rights page on its website. Thank you to The Washington Post for drawing attention to this. It had a fancy law firm, WilmerHale, conduct the audit, and I think it just really shows the limits of these kinds of box-ticking exercises. It's about 20 pages. The five pages that are on content moderation are all really high-level stuff that recounts publicly available information about what YouTube and company already do, and it includes groundbreaking recommendations like: we recommend that, for markets with heightened election misinformation concerns, Google ensures that employees have language fluency. So thank you WilmerHale for that advice.

Speaker 2:

Right. That recommendation costs $10,000.

Speaker 1:

That's right.

Speaker 2:

It was just for that full...

Speaker 1:

I would do it for less if you... No, actually, I wouldn't. Academic integrity, blah, blah, blah, blah, blah. But my God, in an alternate career, that is the job I want: content moderation civil rights auditor. Good alleyway.

Speaker 2:

These human rights audits have been created at the demand of external groups that have put pressure on the companies. I never understood what the external groups thought they were going to get, because, to put on my [inaudible 00:47:16] hat, an audit is a test against a baseline. So if you're going to audit something, you have to audit it against some kind of baseline that you're measuring against. In finance, you have generally accepted accounting principles, and you have all this definition of what is allowed accounting for a public company, and so you can audit against that baseline. There is no baseline for what is an appropriate human rights response for big tech companies; there's nothing. And so you're basically just having these law firms give their opinion of, I think you're doing good or bad. And if you're getting paid millions, I mean, Google's... WilmerHale's bills have got to be, what, $50 million a year?

A hundred, right? If you're paying a law firm eight or nine figures for legal services during the year, I'm going to guess that, if they're just winging it and they're not measuring against a baseline, they're going to, of course, have a couple of suggestions, but overall you're going to turn out great. There's just no such thing as a human rights audit, I'll just say that, because there's no baseline to measure against. You can have opinions, you can have human rights opinions, but the opinions of the law firms are in no way... If I had to assemble the people on the planet who best understood trust and safety in the developing world, I would not pick anybody from WilmerHale, to be honest. And so I think this is just going to continue to be a really expensive marketing exercise. And it's unfortunate that these groups that have really legitimate concerns about tech operations around the world kind of allowed themselves to get pushed into this corner, because they asked for something that was really easy for the companies to pay for and that won't make any changes.

Speaker 1:

We've seen this in a number of places. This is not a Google-only issue. We've seen it with Facebook's human rights impact assessments as well; they're very similar. And until now, it has been a PR, marketing thing. But going forward, very shortly, it'll be a regulatory compliance box-ticking exercise, when we get to the DSA, the Digital Services Act, coming into force, and we have these systemic risk assessments and then independent audits of what companies are doing. And my concern is exactly as you're saying, Alex: the people that are going to be best placed to spring up and fill the content moderation audit compliance function are going to be these big firms like WilmerHale, or like EY or the Big Four accounting firms, who don't have the expertise but can give you that nice little pretty certificate at the end saying, you know, you have completed an audit and you have complied with your DSA obligations, and we're not actually going to get anything meaningful out of this exercise. So, yeah.

Speaker 2:

But the flip side is the creation of the DSA and such. I'm not a huge DSA fan, but at least if you have countries passing laws and that creates a structure for this, you will have baselines that are created. I don't see the baselines as being super helpful, but those will actually be audits, right? You'll be able to pay Deloitte or KPMG or whomever for an actual audit against a baseline. Whereas right now, this is not that. So anyway, it just...

Speaker 1:

I completely agree with that. I actually think that there is some real benefit here. I think auditing could be a really great step forward in making sure that these companies are doing what they say on the tin. My concern is just that it depends on who does the audit and what industry standards we develop. And if there's no real incentive for this to all be meaningful, because there are a couple of accounting firms shaking hands with a couple of big tech firms and saying, "Yes, we're all doing a really great job here," it's not clear that that's going to result in anything meaningful. But it could. It's going to depend a lot, as everything with the DSA does, on implementation and enforcement. So we'll have to wait and see.

Speaker 2:

And I also was just complaining about the undemocratic nature of the FTC just inventing stuff to regulate AI without any kind of... And so at least the DSA was passed by the European Parliament, and European parliamentarians are elected by the people of Europe. The European Commission has a huge democratic deficit, and other parts of the EU are not really democratic, but the European Parliament is. So you at least have to give the Europeans credit: they're doing something through people who were elected by their citizens, instead of just having kind of unaccountable bureaucrats get press releases by inventing new rules that don't actually address the problem.

Speaker 1:

That's true. Although there's going to be a big gap between the high-level principles that the democratically elected parliament enacted and what actually ends up being enforced in practice.

Speaker 2:

And who's making most of those decisions in the DSA situation? Do you know?

Speaker 1:

It depends. I mean, for the systemic stuff, for the, what, VLOPs, or however you say it, that's all going to have a lot of centralized enforcement, so that is going to be the agency there. But for some of these smaller platforms, a lot of it's going to happen at the national level as well. Again, we will wait and see; lots of open questions. And with that, this has been a long one. I actually emailed you yesterday saying I think it's been a bit of a slow news week, but here we are, coming up on nearly an hour, so we will wrap. This has been your moderator.

Speaker 2:

Well, I was, as you said, comprehensive. That's going to be our new thing: wow, that was comprehensive. Alex, shut up.

Speaker 1:

And imagine if we spoke at normal human rates of speaking; this would've been a two-hour episode. So you are lucky, listeners, that we are comprehensive but quick. This show is available in all the usual places, including Apple Podcasts and Spotify, and show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't have been possible without the research and editorial assistance of John Perrino, and our producer is Brian Pelletier. Special thanks also to Justin Fu and Rob Huck.