Moderated Content

Content Moderation in the Stack

Episode Summary

When we talk about content moderation, we often focus on companies at the application layer of the internet, like the Facebooks and Twitters of the world. But there are a whole bunch of other companies in the internet stack that have the power to knock things offline. So what is similar or different about content moderation when it moves into the infrastructure layers of the internet? Evelyn spoke with Alissa Starzak, the Vice President and Global Head of Public Policy at Cloudflare, and Emma Llansó, the Director of CDT's Free Expression Project, to explore this increasingly pressing question.

Episode Transcription

Alissa Starzak:

Going to the other end of it and talking to your ISP and having the ISP say, "No, I have to monitor and make sure every user doesn't have hate speech. That means we can't encrypt any of the content because I need to monitor for hate speech," you have the draconian potential effect of that. Or alternatively the ISP saying, "I'm not going to serve you anymore. You're not going to have access to the internet anymore because you might do something wrong," or, "You have done something wrong before and therefore I'm cutting you off from access to the internet." Those are pretty draconian responses and it sort of reflects the challenges of infrastructure.

Evelyn Douek:

Hello and welcome to Moderated Content, podcast content about content moderation moderated by me, Evelyn Douek. Most of the conversation about content moderation is dominated by what happens at the application layer of the internet stack. That is, the rules and decisions made by social media platforms like Facebook, Twitter, TikTok, Parler, Truth Social, et cetera. But every so often something explodes that reminds us that there are other companies at different layers in the internet stack that also have the power to knock things offline.

It happened in the wake of January 6th when the Google Play store, Apple App Store and Amazon's AWS all stopped providing services to Parler. Often, however, the company at the center of these controversies is Cloudflare, a performance and security service provider for many of the websites that you visit. Three of the biggest controversies about when internet companies in the stack should act to take abhorrent content offline have centered around Cloudflare, each ending in Cloudflare ultimately pulling service from the website in question.

The Daily Stormer in 2017, 8chan in 2019, and most recently the site Kiwi Farms this year. So to talk through what makes content moderation in the stack similar to or different from the content moderation that we often talk about, I pulled together two people who have done a lot of thinking about exactly that. Alissa Starzak is the vice president and global head of public policy at Cloudflare, which is surprisingly not the most stressful job that she's ever had. Prior to this, she was the General Counsel of the Army. Thank you very much for coming on, Alissa.

Alissa Starzak:

Thank you, Evelyn.

Evelyn Douek:

And Emma Llansó is the director of the Center for Democracy and Technology's Free Expression Project and someone from whom I have learned a lot about the law and policy of freedom of expression online. Internet policy is better because of the work that Emma has done over the years as Public Knowledge recognized this year by awarding her its Information Policy Award. Really excited to get to talk to you again Emma.

Emma Llansó:

Thanks so much for having me on.

Evelyn Douek:

Alissa, let's start with you. I think we should do some stage setting for the parts of the audience who, like me, spend a lot of time online but don't necessarily spend a lot of time thinking about all of the businesses and services involved in allowing me to surf the web. People know about the application layer, as I said, the websites that they visit, and they probably know something about the infrastructure layer, that somewhere in the world there are cables in the ground and under the ocean that move bits around. But I know that you have got explaining "the stack" down to an art form now. So can you sort of describe some of the services that exist between those two layers, the top and the bottom, some of the companies that people might have heard of that are in that space, and where Cloudflare sits?

Alissa Starzak:

Sure, of course. So I think it's worth understanding that there are lots of them, and so I can go in a lot of different directions with the questions of the stack, but I think it's worth starting with a website online. So you mentioned somebody might have a website that is essentially a platform. Well, somebody has to put that website online. That means getting a domain name for it, and it means registering that domain name with a system so it can be found online. So that's sort of step number one. You have to have access to that domain name. So that means you might have a service provider that actually provides access to it. Then you have to store it somewhere. So the content of the website is stored at a hosting provider, and that is a site that actually holds the physical content or the digital content of that website.

And then you have all of the transmission components. So you might actually use a service like Cloudflare's, which helps protect against cyber attack. So we sit in front of websites so that if an attack comes to the website, it doesn't take that website down. You might have a certificate for your website, which allows you to have SSL security on your website. You have the transmission components of it, so it goes over wires both back to the origin and to the user. So there's an ISP angle to transmission of the website.

And then of course you have things like plugins for your website. So maybe you have a donation part of it, maybe you're a nonprofit like CDT and you want to enable people to be able to provide donations. Maybe you use a payment processor. There may be all sorts of other things. Maybe you're worried about attacks and you want to have a CAPTCHA provider. So if you start getting too many attacks, you want to make sure it's people, not just bots; you're actually using sort of services on top. So there are all sorts of different things that actually go into your access to a website online that most people just don't think about, because they generally just work. So people think about the website on the one end, and they don't think about all of the providers that help you get there in the interim.

Evelyn Douek:

Okay, that's really clear and helpful. Thank you. And so I just want to sort of touch on the legal frameworks that govern those companies in between, focusing on content for the moment, what they decide in terms of ... because there are all these other legal frameworks, of course, countless, including data privacy and things like that. But people know, bizarrely, about the now very famous Section 230 at the application layer that gives Facebook and platforms like that a lot of discretion in deciding what to take down and leave up. So tell us about the laws that you think about when these controversies arise, about which websites to provide service to. How much discretion do you have in those moments?

Alissa Starzak:

So I think it's worth understanding what the different legal frameworks are, to your point. So Section 230, which people talk about, is a very broad law from the US perspective, but when we think about the different layers of the stack, that's not actually the one we typically think of. We think about things more like the new Digital Services Act in the EU, which matches a copyright law in the US called the Digital Millennium Copyright Act. And that actually has layers in it.

The predecessor to the Digital Services Act, the DSA, was a law in the EU called the eCommerce Directive, which also had layers in it. And in the layers that you typically see in those laws, you have what's called a mere conduit provider. So those are the ISPs, the people who are actually transmitting over the wires. You have caching providers, those are entities like Cloudflare that provide a set of services that make the transmission more efficient by caching a local copy of content. You have hosting providers, which actually store the content definitively. And then now in the new DSA world, the new EU world, you have online platforms even above hosting providers, and then very large online platforms. So you have this sort of contingent of different kinds of providers that fall into a liability framework. They have different responsibilities that fall on top of them, different sorts of expectations, and then different protections from liability depending on the actions they take.

Evelyn Douek:

Anything to add, Emma?

Emma Llansó:

No, I'd agree that it's really helpful to understand that depending on where in the world you are, where in the world an intermediary is, there might be a different liability framework that they have to consider. And so for a company like CloudFlare, Section 230 and the relatively permissive and protective legal framework in the United States is I think probably really helpful to running the business and to understanding and having just a pretty clear idea that in general they will not be held liable for user generated content. 

But we are seeing more complicated legal frameworks getting developed in different countries around the world. And so that's something I think also to have an eye on as you hear about intermediary liability discussions happening, whether it's in the EU or India or Australia or Brazil. Are they going with those kinds of three or four pretty clearly developed categories of intermediaries? Are there kind of general obligations applying to all intermediaries that might include backbone providers or caching providers or other kinds of services that aren't maybe as intuitive as they might be when we think about what we expect from content hosts or those application layer providers, the name-brand companies you might be familiar with, a Twitter or Facebook or even a Reddit or a Pinterest.

Evelyn Douek:

So talk to me a little bit then, Emma, about what is different about content moderation in the stack. Like, why are we having an episode about this in particular, and not just content moderation generally? So what should or shouldn't we think about when we're talking about the different providers in the stack, and should we be thinking about this as a useful tool in our toolbox for thinking about healthy discourse online?

Emma Llansó:

Yes, this is a fundamental question, and I can probably assume that listeners of this podcast are pretty familiar with content moderation in general. But just as a quick sort of refresher or reminder for folks who don't spend every single day thinking about this, when we talk about content moderation at the application layer, we're talking about services like social media companies, maybe some messaging providers, people operating web forums, making decisions about specific pieces of content, what posts should stay up, what posts should come down, and maybe more systemic kinds of decisions like, in general, how do content recommendation algorithms work? Are they taking steps to suppress content that is maybe not violating their policies but not exactly what they want to see on the service? So there are a lot of different kinds of tools and features that happen with content moderation at the application layer.

And there's a growing trend towards having really pretty extensive and detailed policies about what is the content that's acceptable on this particular service for this particular sense of community or user base, however the service provider might define that. When we look at infrastructure providers, we often see sort of echoes of that system or service, or we're starting to see more infrastructure providers start thinking about having things like content policies or acceptable use policies. But these tend to be much more general and high level than the kinds of really in-depth policies you might see on a Facebook or a YouTube or a Reddit.

You have a more general expectation from users that they will be able to use the service as long as they're effectively not breaking the law and users are not really expecting things like the content of every communication that they try to post or that they try to receive through this infrastructure service to be evaluated for that sense of does it violate a rule against hate speech? Is it harassing? Is it maybe disinformation? 

There's a granularity of moderation that happens at the application layer that is just right now not really expected at the infrastructure layer. And my understanding is it's really difficult to actually implement at the infrastructure layer. Oftentimes when we're talking about different infrastructure providers, the kinds of responses that they could take against content are almost inherently disproportionate.

So think about petitioning a domain name provider to say there is a page or a post or some subset of this website that is violating a policy, or even violating the law, and it should come down. Often the only thing the domain name provider can do is take down the entire domain. They stop providing service to the entire domain. So they are potentially taking down a lot of other non-offending, lawful speech as a way to try to get at that particular content that other people have pointed to and said, "This should not be allowed on your service, you shouldn't be supporting it."

So there's a lot of potential collateral damage from moderation at the infrastructure layer because often the kind of interventions that infrastructure providers can take aren't that tailored, and they also often don't have that same kind of relationship with the users who are posting this content. So one thing that a lot of advocates have argued for around content moderation at the application layer is that services should provide notice to users about what's happening to their content.

Is their content being taken down? Is their account being deactivated? What's the reason for that? There's a lot of sense that the user is in a relationship with this website or this online service and deserves to get information back from that service provider about what is happening to their content and why. Often with a lot of infrastructure providers, you think about all the different sorts of services that Alissa described, that people don't even realize are part of their communication.

There's not that connection directly to the user who might be affected. There might be literally no way, or effectively no way, for that provider to say, "Hey, you. You were actually the source of this content, and for whatever reason we've decided we don't want it on our DDoS mitigation service, or as part of our broad global caching network, or in our community of websites whose domain names we point to." So that ability to actually inform users about what's happened to their speech is potentially significantly reduced, which makes it much harder to think about how you provide remedy for overbroad removals or blocking of speech and content, and what kind of accountability users could ever expect of companies making these decisions if there's just not that relationship to begin with.

Evelyn Douek:

Great. A lot of lawyer buzzwords that I think are really helpful for thinking about proportionality of response. We don't want to knock all of Facebook off because there's one group. I mean actually there probably are some listeners right now that'd be like, "Yes, knock all of Facebook off," because there are a couple of groups, or many, many groups that are doing abhorrent things. But I think generally that idea that would be disproportionate and then this idea of due process. 

Especially because one of the things we see is that a lot of users, if you give them notice of, hey, this content is not appropriate, they may reform their behavior. So in many instances it's not actually that they want to get knocked offline, or they will reform. And then transparency, which I think is really important. The fact that many people may not have heard of Cloudflare, no offense, Alissa, despite it having all of this power to make these really consequential decisions, when people may not know who's making those decisions and how they're making them, does feel less comfortable. I'm curious how much you think about those concepts when you are sitting there making these decisions, Alissa.

Alissa Starzak:

So we think about those all the time. I actually, I want to go back to something you said right at the very beginning because I don't think we make these decisions all the time. I think one of the things that we really try to do is try to make sure that decisions are made as narrowly as possible. Going back to Emma's point, I think that we really sort of strive for that and we think that they're best made at the individual posting level. 

We think that they're best made by, in that case sort of the website owner, if they're taking user generated content. That's a much more narrow ... they have an ability to remove particular posts. They can be really specific about it. Imagine going to the other end of it and talking to your ISP and having the ISP say, "No, I have to monitor and make sure every user doesn't have hate speech. That means we can't encrypt any of the content because I need to monitor for hate speech."

You have the sort of draconian potential effect of that. Or alternatively the ISP saying, "I'm not going to serve you anymore, you're not going to have access to the internet anymore because you might do something wrong," or, "You have done something wrong before and therefore I'm cutting you off from access to the internet." Those are pretty draconian responses, and it sort of reflects the challenges of infrastructure. So those are issues we think about all the time.

I think going back to that point that we don't make these decisions very often, the reason that you know of us in that context is because there are lots of infrastructure providers out there that are just less public than we are. We actually think these are really important conversations to have to really think about because we're not going to come up with better legal frameworks. We're not going to come up with better solutions to some of the challenges that we see online unless we talk about them. And so it's really important to talk about them. So very happy to be here for that.

Evelyn Douek:

All right, so let's talk about them then and get a little bit more concrete in what we're talking about. Listeners may be aware of the controversies that we're sort of referencing, but to be really explicit about it: the first big moment in this area, and Cloudflare sort of bursting onto the scene, was in 2017 when Cloudflare pulled service from the website of The Daily Stormer, after it ran a story insulting Heather Heyer, the woman who was murdered at the rally in Charlottesville.

Matthew Prince, Cloudflare's CEO, wrote a now pretty famous, in these circles anyway, blog post, originally a letter to staff, in which he said, literally, "I woke up in a bad mood and decided someone shouldn't be allowed on the internet. No one should have that power." Then in 2019, Cloudflare pulled service from 8chan, a message board that had been the host of advance announcements of three mass shootings in less than six months.

Prince wrote another blog post saying, "We continue to feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often," which is what we just heard Alissa saying as well. And then this year Cloudflare pulled service from Kiwi Farms, a hate-filled message forum that started as an offshoot of 8chan and has been the instigator of numerous mass harassment campaigns and doxxing attacks, some of which have been linked to multiple suicides.

Matthew wrote another blog post, with a sense of déjà vu, saying that, "In this case the imminent and emergency threat to human life, which continues to escalate, causes us to take this action." But before that blog post, Alissa, you and Matthew had written another one, and without saying so explicitly, because you didn't name Kiwi Farms, you had really sort of strongly suggested that you weren't going to take Kiwi Farms down, or weren't going to pull service.

So let's start there. Can you describe your thinking at the time of writing that blog post? It was in the middle of a growing pressure campaign to remove this, I think everyone, or at least all of our listeners, would basically agree, truly abhorrent website from the internet.

Alissa Starzak:

Yeah, definitely I would agree with that last piece. So I actually think people miss the fact that we had put out a fair amount of content even before that blog post on exactly that point. So after the ... we're providing some history to match what you just said. So after The Daily Stormer issue, one of the things that we really tried to do was have a lot of conversations with people about how to think about the issues that we were facing. And if you looked at Matthew's blog post, our CEO's blog post at the time, what he really said is, "Hey, we need to have conversations." So we started actually having conversations about what it might look like, how you could come up with policies that were a bit more nuanced, where some of the fault lines might be.

And it sort of evolved over time. And so the blog post that we put out during the Kiwi Farms discussion on abuse was actually something that we'd put out on a website the year prior. So there was a pretty detailed description of it, but people don't always pick things up in the moment. And so there was a reality that had been out there, but people actually weren't talking about it and people weren't sort of thinking about the overarching issues.

I think one thing that we sort of recognized after 8chan in particular was that there might be situations where we might have to take action. And so if you go back and look at that website, one thing that we flagged was that there might be a very narrow possibility of taking action. We didn't talk about that in the abuse blog post, but if you go back on our website it specifically says voluntary action, very narrowly circumscribed: areas where somebody might be at risk, potential harm to human rights because somebody might be at risk. That was very much sort of the tailoring of it from a termination standpoint.

I think the challenge for us is trying to figure out where those lines are, what those areas exactly look like for us. And that's always been a hard case, and it's not because you don't have sites like Kiwi Farms, which just have despicable content on them. I think everybody agrees with that, or certainly all of us agree with that. I think the challenge is understanding that if you say you're going to take action, people will come to you to take action all the time on all sorts of different things. And that, from a long-term strategy standpoint, is very challenging.

I think that the other thing I would emphasize is that one of the reasons we are unusual, and have been more public about where we sit in all of this, is that we are probably the only one amongst our peers, our kind of providers, to have a free service. And so the reality is that anybody can sign up online for a set of services that help improve efficiency and help protect them from DDoS attack.

We think that, overall, the idea that anybody has access to a service, that it's not just the large providers or people who can afford to pay a lot, is a really good thing from an overall internet perspective. But it comes with challenges, and those challenges often come in the form of horrible websites that also sign up for services, that sign up for free services. That means that we need lines about how we address the free services piece, and that's where things get hard.

Evelyn Douek:

Okay. So you just talked about developing these frameworks, and I'm wondering if you can walk us through what changed in that framework between August 31st, when you wrote this post outlining your approach and saying basically that you're not going to take Kiwi Farms down, and what happened four days later on September 3rd, when Matthew announced that Cloudflare was pulling service from Kiwi Farms? Because obviously the cynic outside would suggest that maybe it wasn't a matter of principle, it wasn't a matter of factual change on the ground; it was just that the public pressure had gotten to a point where, for brand management, reputation management, Cloudflare sort of made the business decision to pull service.

Alissa Starzak:

I think that the question underlying all of this, and this is where you get into, where are we as a society? So the big piece for us has always been that the kinds of services that we were providing, the idea that we were sitting in front of a website to prevent DDoS attack, the consequence of removal of service is that you open up a site to DDoS attack [inaudible 00:22:29]-

Evelyn Douek:

Sorry, for our listeners that don't know, can you just take a second to describe what a DDoS attack is, just in case?

Alissa Starzak:

Absolutely. So it's a kind of cyber attack, which again is criminal in most places, that takes a site offline by overloading it with traffic. So if you dump a bunch of fake traffic on a site, it will take the website down just because it can't handle the amount of traffic it's getting, so you can no longer get to it. So imagine a freeway where you now have a traffic jam. The legitimate cars are trying to get through, well, they're all legitimate cars in that context, but you dump a bunch of extra cars on there that are just empty cars sitting on the freeway, and you can't get access to the place that you're trying to go because there are too many things in the way. That's essentially what knocking a site offline looks like in the context of a DDoS attack.

So for us, when we sort of looked at our set of services, and this is the review that we did, we said, "Okay, if we're hosting services and we can actually do something substantive, which is to remove content, that puts us in a different place." And so we have a different standard for that. But if we are playing a role where the effect of our provision of services is simply to prevent a DDoS attack, the idea that the way you address problematic content is a DDoS attack is itself a problematic issue.

We need to solve our problems. We need to solve the problem of problematic content in ways that are not themselves criminal. So vigilante justice is probably not the best answer, and that's a belief that we have long had and we continue to have. But what we saw in the context of Kiwi Farms, and this is what came up in those four days, was what do you do when the fact that a website continues to be online poses a physical risk to someone? What does that mean when justice moves too slowly to make sure that that person isn't under threat?

And I think the pressure on us to think about how to manage that was really, really hard. And I think that's what ended up leading to our decision, the idea of, again, physical threat to people, which is somebody could come under attack because the site continued to be online was what eventually prompted us to take action. So I think the principles are all still there. The challenge is how do you navigate that situation where there is potential real harm to people on the ground? And that's what we saw in the run up.

Evelyn Douek:

Emma, I'm curious for your reactions as you were watching this unfold. One of the reasons why I wanted to get both of you on the podcast together is we were all on a panel together and we were talking about how this Kiwi Farms thing had popped up, and there was a lot of sense in the public discourse of déjà vu. Here we go again, we're just sort of back to where we were. And the two of you were saying, no, actually there's been a lot of conversation about this in the years in between, developing the kinds of frameworks that people have been calling for, or that Matthew Prince calls for every so often in his blog posts in the aftermath of these controversies. Emma, maybe you could talk a little bit about those frameworks and what you were thinking as you saw this latest controversy unfold.

Emma Llansó:

Yeah, absolutely. I would say one of the things I was thinking was how hard it is to pick the timing of rolling out something like a framework to have it not happen right alongside any particular big controversy. Because the truth is it takes any company or group of companies or multi-stakeholder effort months if not years to develop a set of policies and principles about, okay, how and in what circumstances do you respond with content removal, with shutting down accounts, with withdrawing service from somebody in a way that will take their speech and their content offline? 

I guess you could spin one up really quickly just kind of on the back of a napkin, but I think we've seen any efforts to do that fail pretty quickly and pretty miserably. So in addition to individual companies like Cloudflare going through the process that they went through to develop their sort of set of principles, we've also seen groups like the DNS Abuse Framework get together.

So a number of different domain name registries and registrars, in, I believe it was 2019, published a framework about how they, as private company decision makers in the DNS system, were going to think about abuse and their role in addressing abuse, and when and in what circumstances they would intervene. I think we're seeing a lot of different companies going through some of these same thought processes because, at least right now, at least in the United States, the legal environment is such that they are, in the vast majority of cases, not obligated by law to do anything. But there is increasing scrutiny on what they can voluntarily do, and what they can do with the power and the kinds of control that they can exert over online content and online information, and a lot more focus from civil society. We saw in the Kiwi Farms case the organizing and the very intentional campaigning at Cloudflare as the key focus of the campaign, to say you are the linchpin between this site staying online and this site coming down.

A lot more people are getting a lot more savvy about that kind of thing and thinking of infrastructure as a place to go with these questions. And that I think is raising a lot of these kinds of principles and values conversations for these different companies because when you don't have a legal framework that requires you to do anything and under section 230 also shields you from taking action against different kinds of content, then it really does come down to this sense of individual company decision making. 

And I think in talking with different infrastructure companies, there are a lot of different considerations that come into play. Always there's this sense of, "I hate this stuff, I don't want to have this as part of my business." But also, I think really crucially, what kind of promises can these companies actually make about the sort of content interventions they can actually sustainably and reliably provide?

Because if we think about content hosting for a social media service, we all know that, or probably listeners of this podcast know it's like, "Oh yeah, there's no way for a site like Reddit or a site like Twitter to look at every post before it goes live and make a decision before it gets published. Is this acceptable or not under our rules?" It's even less possible for a lot of infrastructure providers to do that because they're not even necessarily content hosts. They don't even necessarily see the content until after it has already ... They may never see it at all. It may have transmitted across their networks or they may be kind of a conduit for people getting access to that content and they have no sense of what are all of these different kinds of websites or kinds of sources of information that they're connecting to.

So the idea that they could proactively ensure that harmful content, that hate speech, that disinformation, that harassment is not available in their services is in a lot of ways effectively impossible. And if they're more in a position of trying to respond to complaints from users, what kinds of levels of responsiveness can they actually put in place? Thinking about this when Alissa was describing CloudFlare's focus on imminent threats to people's lives and to people's safety and wellbeing, I've talked with different companies who basically have to set up their own kind of internal investigations unit, to be able to do the sort of investigation themselves that you might expect from law enforcement entities, to try to determine if what they're looking at is a true threat of violence or a true instance of stalking and manipulation.

Because any time you see a service set out a set of rules or principles for how you could get content taken down, there will be people who need to use that for their own health and safety. And then there will be a lot of people who abuse it to try to silence other people who they don't want to hear from, whose content they want to see taken down in some way.

So even just articulating the principle, articulating the standard, for any company has to be accompanied by some sense of, how do you vet the different claims or reports or flags that get sent in to you, to make sure that they're legitimate? And that is not an easy task, I think especially for infrastructure providers who are that much more distant from the users that they serve.

So a lot of different things for a company to think about when they're trying to think about what framework to put in place. But I think for me a lot of it comes down to what are the actual promises that they could make to their users. Because a lot of times you see a tendency to talk a big game about, "Hate is not allowed on our service," but can they actually implement that in practice? And if not, what kind of unrealistic expectations are they setting up for their users?

Evelyn Douek:

Yeah, there is this really complicated relationship between proactive monitoring and also reliance on community activism for getting results. So you mentioned some of the problems with relying on groups that mobilize, because very often they may not be legitimate campaigns to try and get content taken offline. But on the other hand, there is a sense of, should the burden be on the victims in these really abhorrent situations, like the Kiwi Farms situation, where these people are being abused and harassed, to mount this massive public pressure campaign to get a form of justice?

Curious, Alissa, for you to talk a little bit about how you do this in practice. You talked about the very small category where CloudFlare is open to taking things down, where there's this imminent threat. How much of that is proactive tracking of who your customers are, how much of it is reporting, and what channels are there for reporting, as opposed to just people tweeting a lot?

Alissa Starzak:

I think there are really important pieces here, and I think one of the things that sort of undergirds a lot of what Emma said is this idea that you want fair systems, you want mechanisms where you're not doing things indiscriminately. You have sort of a set of standards that you can then apply; you're applying them evenly, you're applying them fairly. There's very much a sense of fairness that goes into that. And I think the challenge from an infrastructure provider standpoint is that it's not just that they don't look at content.

If you think about what happens on a social media platform in general, they are looking at content for lots of other reasons. So they are using algorithms to assess content, they're prioritizing content, they're advertising against content. There's a lot of work that's being done on the content in general. When you get into the infrastructure sphere, that's just not the business. The infrastructure space, and certainly CloudFlare's space, is finding a way to transmit content online, and that's it.

We're not looking at content, we're not doing proactive assessment of content. And in fact sometimes that would be problematic. When you start getting into the content world, you're now talking about things that are being transmitted online that you want to be encrypted, you want them to be communications that are just transmitting over the wire, right?

So I think from a practical standpoint, we just look different when we get into that proactive monitoring space, and proactive monitoring from infrastructure feels very weird. Again, think about it from the ISP level. Does your ISP monitor everything that goes out over your local internet connection? Probably the answer is no. So I think the goal for us is to come up with standards that we can apply fairly, recognizing that it's not going to be proactive monitoring and recognizing that it's going to have to be responsive to some degree, but making sure that we have abuse mechanisms set up that are relatively easy to use, where people can actually reach us and we can enable acceptance of abuse complaints.

We can make sure that if you believe in this idea that the people closest to the content should be the ones able to respond, we have to be able to get an abuse complaint to those entities as quickly as possible. So we've created a mechanism to do that, to make sure that abuse complaints can actually get to the people who can take action on the content, particularly when we cannot. So if we are sitting in front as a security provider or doing caching on content, we're not going to be able to remove that content online. We know that the people who are submitting abuse complaints probably want that content removed online. So we have to make sure that the people who could remove that content have the ability to do it. And that's what our abuse structure is set up to do from a practical standpoint.

Evelyn Douek:

So how often is this happening away from the public eye then? We know these three big instances where there's been public outcry and a spotlight on CloudFlare, which has taken away services from these three websites. Are those the only instances where that's happening, or are there other cases where people are using these abuse mechanisms to flag things that CloudFlare does take action on?

Alissa Starzak:

So they're very narrow instances, and what we've actually tried to do is build more transparency into that. So we actually put out a transparency report, which we are continuing to expand. One thing that we started to do is think about those areas where we might want to talk a little bit more. So we tried to be very clear in our transparency report about instances where we took action. So for example, if there is a website dedicated to CSAM, we will take action on a site like that.

Evelyn Douek:

So that's a child sexual abuse material.

Alissa Starzak:

That's right. So really horrific content, again in the same vein, harm to people, concepts that fit into that category. We tried to report on those. We're trying to expand that reporting from a transparency angle to really make sure there is a public conversation about the instances where we might be inclined to take action, which again are incredibly narrow on the DDoS protection side.

But we also think it's important to report on areas where maybe we are hosting content, the limited areas where we're hosting content. We want to make sure we're reporting on how many instances of those we see as well, and on when we take action. So we really worked on building out our transparency report to articulate the instances when we potentially take action, describe the instances where we've been ordered to take action, and really talk through a lot more of those.

That doesn't include all instances where we get abuse complaints. Because again, the idea that we have that it should go to the hosting provider first, or the website owner and hosting provider first, means that we don't necessarily see what happens to the ones that we send on. They could be actioned, but we don't necessarily know, because we're not the provider that is best equipped to take that action.

Evelyn Douek:

Emma, what do you think of the transparency reporting in this space? This is your moment to issue a list of demands. How happy are you with the transparency that we get here and what more would you want to see?

Emma Llansó:

While I'm always very encouraging of any steps that a company has taken toward transparency, I always think there can be more. And not facetiously, I really mean it: one of the things that we have seen in the trends around content moderation at the application layer is that over time, content moderation, its capabilities, and what a particular company can do move from being obscure to being very well known and understood by different kinds of state actors and private actors, and potentially used in abusive or manipulative sorts of ways. We also see that it can be difficult for companies to think through, how do I be transparent about this? What counts as a government demand? What counts as an informal government request, or a request coming in through a quasi-governmental organization, or a request coming in from a non-profit organization versus one of my regular users?

There are a lot of different things to try to count and keep track of in transparency reporting, and I think infrastructure providers need to be aware of that aspect of engaging in moderation decisions too. Because when you start taking action against content and demonstrating it to the world, whether you're as public about it as CloudFlare is or still trying to operate under a cloak of darkness like many other infrastructure providers are, eventually people notice, and eventually people try to figure out how to use your technical abilities to control speech and information for their benefit.

If you have a history of transparency reporting, if you have done the internal thinking and conversations and set up your databases and different information flows to keep track of that information, you will be much better poised to tell people about it, and especially to draw attention to the really overbroad government requests that your company does not want to comply with but feels like the government of country X kind of has you over a barrel on. So I think an understanding of threat modeling around content moderation systems, and an understanding of the role of transparency as trying not only to keep the company accountable to its users for the decisions that it's making, but also to make sure that all of these other third-party actors, government actors in particular, have some kind of public-facing accountability for their side of it too, is really, really important.

Evelyn Douek:

And I just want to echo your point as well about gratefulness for the companies that are transparent and do participate in these conversations. I think CloudFlare is doing a good public service by writing these blog posts and appearing on podcasts, which is definitely a public good, because this is an important conversation and many companies try to duck it and keep their heads down. This is what I like to refer to as the YouTube problem, in that YouTube tries to sit out of a lot of content moderation conversations and it works for them, in that we spend a lot of time talking about Facebook and Twitter. And so as much as I'm trying to adopt the hard-hitting podcast host approach in this, I am really grateful for the role that you're playing in bringing public awareness to this.

Talking about trends that we're seeing in this space, one of the ones that I want to talk about as well is whether we're going to see more of this from now on. Not only because we have more public awareness about it, but because one of the dynamics at play here is that as the application layer has gotten more proactive and aggressive in its content moderation, we're seeing a lot of the people that have been deplatformed going and congregating on websites for that purpose, to discuss the things that they were kicked off the Twitters and the Facebooks of the world for. And so we see a lot of these platforms that don't really police themselves popping up. 

One of the things that Matthew said in his blog post about 8chan, and the reason why CloudFlare pulled service from 8chan, was that it was "lawless" in that it didn't have the kinds of rules that we see on many of the applications, unlike Facebook and Twitter, where the rules now run to pages numbering easily in the double digits. Emma, maybe can you talk a little bit about the conversation here, the trends that we're seeing, and whether you think this is going to be something where there is more and more pressure on infrastructure providers specifically for this reason?

Emma Llansó:

Yes, it absolutely is going to be an area where we see more pressure, and we're actually already seeing that play out with a particular kind of infrastructure provider that you can think of as app stores. So app stores are a relatively small market. We have the Apple App Store, the Google Play store, and there are other app stores out there. But when Congress or the European Union or other governments around the world want to focus on who can really control what's happening on all of these many, many apps that exist out there in the ecosystem, they look for the more narrow control points, and app stores end up being those, because they have relationships with the third parties who are making apps and trying to upload them, and they have a whole set of policies and standards. Some of them are privacy standards, some of them are technical security standards.

Increasingly, we're seeing content policy kinds of standards. Often there's a requirement for the app to have its own content moderation process, its own set of answers for how it will moderate content through that app. And if you don't have that kind of set of policies and processes in place, then you're not welcome in that app store, and it becomes much, much harder, or in some cases impossible, for your users to get access to you when they try to connect to that app.

And I think all of this goes to the core underlying feature about online communication, which is that these technical intermediaries are much easier potential points of control than actually trying to go out and control all of the potentially illegal speech that speakers are doing across the web. And it's part of why as a lawyer, when we think about the role of intermediary liability laws like Section 230 or the eCommerce Directive and the DSA in Europe, we have thought about them as crucial to protecting speech online because they shield the intermediaries who are the obvious points of focus and control from the liability that would make them very easy to influence on how they use that control over user speech.

But Evelyn, to your question, as we see more of the sort of speakers online who are going to continue saying hateful and harassing and abusive things moving to platforms and content hosts that are also willing and eager to host that kind of speech, you start bumping back down the chain of intermediaries, and you really start asking, okay, who's the next most, whether it's responsible or legally controllable, actor? What is the actor in the technical chain that ultimately enables the speech to stay online, who might be most vulnerable to either a public pressure campaign, or legislative or regulatory threats, or a CEO who just decides, "I'm not going to stand for this, and I don't have to, because I get to make the rules about what happens on my service"?

So I think paying attention to what's going on with app stores is an important idea if you're trying to understand what's going on in the infrastructure space in general. Because that's where we've already seen legislative proposals in Congress and discussions in other countries about, hey, how could we use these stores as a kind of control point for speech, since it's really hard to go after all of the individual speakers, or even all of the individual app makers, directly.

Evelyn Douek:

Yeah, I couldn't agree more with that. I think we've seen this play out with Parler and Truth Social, where it's the app stores, the Apple App Store and the Google Play store, that have gotten them to institute content moderation policies and enforcement mechanisms, because if you want access to users, those are the places you have to go. So as much as we talk about the EU and Congress, some of the most important regulators of speech online are Apple and Google through their app stores.

Alissa Starzak:

I want to actually jump in on that one a little bit, because I think it's really interesting to think about what's happening. There are a lot of competing pressures in tech policy right now, and I think one of the challenges with the app store model is, if you look at what happened in the EU, at the same time that the Digital Services Act came out, which was dealing with the intermediary liability issues that we've been talking about and the responsibilities of platforms and the responsibilities of different providers, you also had the Digital Markets Act, which was trying to regulate gatekeepers.

And so you have these competing tensions of how much does a gatekeeper actually own those questions of access or not, and what are the pressures on those gatekeepers? And I think you're seeing that pressure in tech policy across the board. Same thing on content moderation. For all of the pressure there is to moderate in one place, the idea that people are being censored comes in; even on the US side, you end up with laws on the other side that say, "Okay, well then you can't censor. And therefore we'll have a whole new set of laws that restrict that."

And I think those sorts of pressures on tech policy are constant. And I think that from an infrastructure provider standpoint, what we are trying to do is set a set of principles where we are not the ... going back to Emma's point about the choke points, we want to set a set of principles that make clear that there are areas you want to target first. So if you are speech protective, you want to focus as narrowly as possible on the providers that can actually take narrow action, and you always should start there.

And so from a principle standpoint, and in terms of the role of transparency, one thing that we have tried to do is set out what principles we think about, and that is one of our principles. It's this idea that you should be thoughtful: going to the top of the stack first, if you want to be speech protective, is an important component of laws that get passed, but also just as a concept from a company standpoint. If someone comes to us and says, "I haven't gone to anybody else, I haven't asked anybody to take action on this content, but they are bad, take action on them."

Our answer back to them is almost universally going to be, "Well, have you asked them to remove the piece of content that you have concerns about? Have you asked them to take some sort of action?" That's got to be the first step. And I think we've been talking about an entity like Kiwi Farms, where you've gotten much farther down the line, where you have bad actors potentially, so that model doesn't work. But those are really important concepts to be transparent about from a company standpoint.

So the role of transparency is not just about individual requests, it's also about setting out principles, describing why you do what you do, describing why that might be important, describing why it's not just individual decisions on certain kinds of entities. It's got to be bigger. There have to be some frameworks in place that, even as a company, you start to apply and start thinking about putting out there before you respond to anything, whether an abuse request that comes in or a government request.

Evelyn Douek:

Okay, great. So that's a good segue, since we're talking about government requests, because I want to zoom out a little bit and talk about global issues here, because we've been a bit parochial in talking about the West, largely the United States perhaps, but also the EU, and CloudFlare's a global company. These are global issues when we're talking about infrastructure. And we've referenced it, but I think it's good to be explicit about the fact that the equities change, and to be conscious of how what you do in one jurisdiction can have incentives and consequences in other jurisdictions. So I think, Emma, it would be really helpful maybe if you could, I guess, lay out how we should think about this globally, what we're seeing in other jurisdictions, and why that might impact how we think about content moderation at the infrastructure level.

Emma Llansó:

Yeah, I mean it's a complicated question. To take the easiest angle on it first, I think a lot of our conversation has been in this framework of what are the voluntary decisions that companies are going to make, presuming a framework that allows them to make those decisions, and how do we think about making those decisions responsibly, in a non-arbitrary way, all of that. But I think one of the pressures that we see worldwide is just more and more governments being interested in and willing to regulate all different parts of the technical stack of the internet, and to put in place much more significant and unflinching obligations for different kinds of intermediaries, including infrastructure providers, to do proactive monitoring, which, as Alissa was describing, for an infrastructure provider ends up being really inconsistent with things like allowing encrypted communications across your services, or just in general not monitoring absolutely everything your users do.

So I think one of the dynamics to be aware of is that as we see companies, especially companies based in the US, thinking through what to do with all of this power that they have and how to use it voluntarily, governments around the world are also paying attention, thinking, "Ah, is that what's technically possible? Is that a kind of power that I could harness for myself, for my government, for my law enforcement authorities to use as a way to actually get some control over this global speech environment that is the internet, which was touted as being sort of impervious to individual government censorship and government restriction?"

I think those dynamics matter, and understanding that how companies respond in these kinds of voluntary conversations is also probably setting out some of the blueprint for at least the regulatory proposals that we will see coming in countries around the world. It's why understanding what the technical capabilities are, putting it in the framework of international human rights law, and talking about the proportionality and non-arbitrariness of decisions is really important, because again, what we've seen in the trend of content moderation at the infrastructure layer is that the voluntary conversations become the blueprint for the next set of regulatory discussions.

And I would also flag that I think there is some real technical thinking that will be necessary as we see legislation around intermediary liability shifting from what happens on a case-by-case basis with individual posts of illegal content or individual accounts, and what intermediaries' liability is around those specific individual decisions, toward concepts like systemic risk assessment or duties of care that intermediaries might have, which are much more focused on what effectively end up feeling like the business practices of an intermediary.

And in general, how do their systems operate and what are the kinds of things that they do to minimize abuse? I am really concerned about seeing, for example, the idea that in order to take action against abusive content, you need to have some sort of backdoor into encryption as it's used in your service. I could see the wrong set of regulatory conversations going in that direction. There are huge threats to people's ability to access end-to-end encrypted services happening in countries around the world right now.

So I think as we think through potentially changing regulatory structures, we also have to understand that those conversations are shifting, and it's going toward these more systemic models. And I think we need to hear a strong articulation, both from the technical infrastructure community itself and from digital rights activists, about how and why keeping infrastructure services open, relatively unmonitored, and privacy protective is crucial, even if that means you're taking some of them off of the playing field as far as being places that could intervene in abuse.

Evelyn Douek:

Yeah. Alissa, I'd just like to get your reactions to that, about whether that matches your experience of what you are seeing, but also how you think about it when you get legal orders from countries that don't have a First Amendment, that are less rights-respecting. I mean, presumably CloudFlare has to obey the law of the countries in which it operates. So how do you think about that, and about balancing it against your international human rights obligations, which you have made reference to in these blog posts as being important to you?

Alissa Starzak:

Yeah, no, so I agree fully with Emma on all of those pieces. I think going back to that idea, one of the realities is that CloudFlare is in a world ... going back to that legal setting, often what you see are legal responsibilities that fall on a local ISP, which is very much a regulated entity within the country, which might have wiretap obligations, might have blocking obligations in many countries. At the other end, you now have the application layer and people putting new laws in place on things that are platforms.

And then you have hosting providers, but often, at least in the existing legal frameworks, governments are not quite sure what to do with other kinds of intermediaries. And so you have broad standards and laws which are a lot less clear about what the obligation is. And you typically have two different things that governments potentially want. Maybe they want information about what's passing over networks, so that might be a wiretap, or it might be information about a subscriber, so more information there; or they want content removed.

The interesting thing about where we sit from a legal order standpoint is that we're not really good at providing either one of those. We don't have a lot of information about the entities. We're not generally an email provider or somebody who's got a lot of communications, so we're not that useful in the surveillance context. And on the other hand, we can't remove content if we're not hosting it, so we're not the best entity for that either. And so I think when we get into those conversations with governments, we can suggest that they go to the hosting provider if they want to have content removed. We can have a sort of broader set of conversations, but we often get the response that their obligations aren't very clear either.

And so we end up with some interesting conversations, but we are often not the entity that people go to first, at least right now. And I think that that is something that is important for us, because again, that goes back to that set of principles. We don't believe that we're the best actor in those contexts. If a government came to us and said, "Terminate services," my underlying question would be, well, are they going to DDoS attack them to have the content removed? Because that would be the effect, and that raises a whole other set of concerns. So I think we start thinking about it within the frameworks that we've built. We think about it from a, what is the consequence going to be? What would happen based on the order? Is there someone better positioned? We go through our own analysis of how to respond to those situations.

But it's challenging, and I think people often don't understand the point that Emma raised before, which is that governments absolutely are looking at playbooks. And the strange thing about this world of the internet is that sometimes playbooks come from people who are trying to do good things online; they want to get bad content removed, but those same playbooks can get misused, and they can get used against people in ways that they are not foreseeing down the road. And that's a big challenge from a company standpoint: trying to figure out how you talk about those, balance those, and address the concerns that might come up from going down a road that is probably not long-term good for anybody, honestly.

Evelyn Douek:

Okay. So that leads nicely to the last closing question, which is that this has been a little bit of a pessimistic podcast, where we've been raising lots of red flags and apocalyptic predictions about what might happen. It has now been about five years since The Daily Stormer incident, and it's funny because in Matthew's post about that, he said that one of his employees, when they found out that they were pulling service, asked, "Is this the day that the internet dies?" Well, spoilers, the internet is still alive.

And so I'm curious how much of this is raising alarms because it's a necessary conversation to have, and how much of it is you actually thinking that we are at a very dangerous moment. And so I guess it's always dangerous to ask for predictions, but where do you think we will be five years from now in this conversation? Will we still be sort of playing whack-a-mole with ad hoc random websites as they make their way into the news? Or, given some of these conversations that have been happening, this work that's being done by yourselves and many other hardworking activists and thinkers in this space, how much further along are we going to be in this conversation? Emma, why don't we start with you?

Emma Llansó:

There's a lot in that question. One, just because we haven't seen a total internet blackout doesn't mean that we haven't lost some important things about the internet. On that, I always go back to the passage of FOSTA-SESTA in the US as a really crucial example of something that broke the internet.

Evelyn Douek:

So just to explain really quickly what FOSTA-SESTA is.

Emma Llansó:

So this was the law that was allegedly about trying to control the sex trafficking of children online, and it was the first and only time thus far that Section 230 has been amended in the United States. But the practical experience of that law, in part because of how broadly, and we would argue unconstitutionally broadly and vaguely, it was written, is that across so many different sites and online services, financial payment processors, the application layer, and other kinds of infrastructure providers, we have seen people who are involved in sex work, in advocacy for the rights and health and safety of sex workers, and people just talking about sex and sexuality concepts and topics generally, find it much harder to operate online.

FOSTA-SESTA broke the internet for people talking about sex and sexuality in a pretty significant way. And that's the kind of thing that I worry we are setting ourselves up for in infrastructure moderation as well. I'm sure the major name-brand social media services will continue trucking along, although, asterisk, I have no idea what's going to happen with Twitter.

But otherwise, a lot of different parts of the internet, and I'm sure most of the parts of the internet that are trying to sell you something, will probably figure out some way to keep going under the variety of different regulatory regimes that we could get. What I really care about are the people who are going to be pushed further and further to the margins, who are going to be considered effectively the acceptable losses of having more aggressive or intentionally overbroad responses to different content moderation demands all up and down the stack. It's going to be the activists and the people fighting for the rights of people already on the fringes of society who are going to find it hardest to operate. And so that's what I really am concerned about in all of this.

I think there's always got to be room for optimism. How else do you get out of bed at the beginning of the day? I think the [inaudible 01:00:36] is that there has been so much organizing, for example, in the sex worker community before the passage of FOSTA-SESTA and all the way through. I think the internet is still an amazing tool for raising awareness, beginning organizing, getting campaigns going, and trying to make sure that we can pay attention to all of the different legislative and regulatory processes that are happening and make our best fight possible, so that those respect international human rights standards, are narrowly tailored and proportionate, and actually take into consideration the overbroad impacts that some of these proposals could have.

Alissa Starzak:

The thing that I would also add to it is the reality of the government piece of that. I think we have seen a pretty significant splintering of the internet as governments try to assert control over the internet within their own borders. I think you're going to continue to see some of that, and I think that's a long-term challenge as well. And I think trying to figure out how we come up with something that is a more global solution, that feels like it solves some of the problems, is actually an important component to coming up with some optimism there. And I do actually look at some of the recent initiatives from the US government, things that are trying to address splintering of the internet, things that are trying to build more coherent regulatory frameworks and put in place some principles.

I think those are grounds for optimism. The other thing I would say on the grounds-for-optimism side is that there has been a really interesting movement where people are thinking about things that they can do to take back their own infrastructure. And that sounds funny, but I mean thinking about being able to build their own communities. So maybe it's not that you have something that's as easily controllable by someone else; instead you have things that look like tools, where people can manage the concerns that come up.

So from a company standpoint, one of the reasons ... we talked about the fact that we have free services, and one of the reasons that's so important for us is because it enables a set of tools for people who might not otherwise have access to them. So this is the idea of building ways for people to decide what's important in their community. We have a CSAM scanning tool, a tool that actually looks for hashes of known child sexual abuse material, for example, where you can build those things into your own platform and decide how you want to address it. I think there is some optimism about new ways of looking at being online and new ways of connecting that might look a little bit different than what we see right now, that hopefully will start to pop up as we move further down the road.

I think it's going to be challenging though. I think that we all have an important role to play in thinking about what good solutions look like, and I think we really should be thinking about those because the negatives of one-off answers are pretty significant from a global internet perspective. So again, I think there's some pessimism there, but I want to end up on an optimistic note too.

Evelyn Douek:

I appreciate that, both of you injecting some optimism into this. We'll have to have you back in five years to see whether we're looking at an apocalyptic hellscape of an internet, or something with puppies and rainbows, or, most likely, somewhere in between. Thank you so much for your time today.

Alissa Starzak:

Thank you for having us.

Evelyn Douek:

This episode of Moderated Content wouldn't be possible without the research and editorial assistance of John Perrino, amazing policy analyst at the Stanford Internet Observatory. It is produced by the marvelous Brian Pelitier. Special thanks also to Alissa Ashdown, Justin Fu and Rob Huffman. Show notes are available at our Stanford website.