Moderated Content

New York Attorney General v. Blogging Law Professor re: Online Hate Speech

Episode Summary

In the wake of the Buffalo shooting in May, New York passed a law imposing certain obligations on social media networks regarding "hateful conduct" on their services. It went into effect at the start of December and Eugene Volokh, a professor at UCLA Law who runs a legal blog, is challenging the law as unconstitutional. Evelyn sits down with Eugene and Genevieve Lakier from UChicago Law to discuss.

Episode Transcription

Eugene Volokh:

I like how everybody on this podcast will be speaking with a foreign accent.

Evelyn Douek:

Oh, that's true.

Genevieve Lakier:

That is so true.

Evelyn Douek:

Talking about the First Amendment with three foreign accents.

Genevieve Lakier:

I really like that.

Evelyn Douek:

Hello, and welcome to Moderated Content, podcast content about content moderation, moderated by me, Evelyn Douek. On May 14th this year, an 18-year-old man shot and killed 10 people at a supermarket in Buffalo, New York. He live streamed the crime on the social media platform Twitch for about two minutes before the feed was taken down, and this was enough time for the footage to be captured and shared widely across many platforms. In the months that followed, the New York Attorney General, Letitia James, launched an investigation into the role that online platforms played in that crime and the final report, released in October, found that the shooter had been indoctrinated and radicalized through online platforms where he viewed explicitly racist, bigoted, and violent content.

It was against this background that New York State also passed a law directed at making platforms more responsible for this kind of content, and one of my guests today has brought a challenge to that law, arguing that it violates the First Amendment. Eugene Volokh is a professor at UCLA Law School and one of the country's most prolific First Amendment scholars. Relevantly, I read an ABA article this week that called you, Eugene, a blogging law professor, so now I know what title I have to aim for in my career. We'll come back to why that's relevant, but for now, thank you very much for joining us, Eugene.

Eugene Volokh:

Thanks so much for having me.

Evelyn Douek:

To help unpack the First Amendment issues raised by this law and Eugene's lawsuit, we also have Genevieve Lakier here, a professor at the University of Chicago Law School and a big First Amendment nerd. Thanks for joining us, Genevieve.

Genevieve Lakier:

Hi, happy to be here.

Evelyn Douek:

So, Eugene, let's start with a general overview of the law, then, because it is quite short. It's called the "Social media networks; hateful conduct prohibited" law, which is an interesting title for a law when you actually look at what it does, but we will come back to why it's so interesting. But as I read it, it basically has three requirements: the first is to have a certain kind of policy, the second is to have some sort of complaints mechanism, and the third is to respond to complaints. But Eugene, can you beef that up for us a little bit and explain what the obligations are under the law?

Eugene Volokh:

Sure. So, the law, first of all, applies to social media networks, but that includes any service provider, which, for profit-making purposes, operates an internet platform that is designed to enable users to share any content with other users or to make such content available to the public. So, we operate a comment section in our blog, so I think we're covered by this law, because we are, essentially, service providers, we provide a service, and we are doing it for profit making purposes. It's a very, very modest profit, but it is for profit.

Evelyn Douek:

In case listeners aren't aware, which I can't imagine how they wouldn't be, but the blog in question is The Volokh Conspiracy, which you run and which generally hosts legal blog posts and has comments underneath the posts. Is that how you would describe it?

Eugene Volokh:

That's exactly correct. The comments section is designed to enable users, or commenters, to share any content, basically text content, although they could have links, and to make such content available to the public, so I think we're covered. And then the law also is limited to so-called hateful conduct, but conduct is just the law's way of referring to speech. It means use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons based on race, religion, ethnicity, disability, sex, sexual orientation, gender identity, and the like, so it covers particular viewpoints, it deliberately targets particular viewpoints, and viewpoints that are constitutionally protected. Now, some subsets of incitement of violence, if intended to and likely to promote imminent lawless conduct, which is very rare on a social media platform, might be unprotected. At the same time, there's no First Amendment exception for speech that vilifies or humiliates or incites violence, outside of that narrow exception set forth by the Brandenburg v. Ohio case.

What's more, a law targeted to a subset of incitement that just basically is bigoted incitement would itself be unconstitutionally viewpoint-based, even if it were limited to unprotected speech, because of a case called R.A.V. v. City of St. Paul, in which the Supreme Court said, "Look, you can ban certain categories of speech," in that case it was fighting words, in this case it would be incitement, "But you can't ban subsets of them based on their being bigoted." So, the law targets certain kinds of viewpoints, including constitutionally protected viewpoints. Now, targets for what? It is true that on its face, despite its title, the law does not seem to actually require us to block any comments. It does require us to have a policy about hateful conduct and it requires us to provide a mechanism by which users can report incidents of hateful conduct, however they define it. Then, it also says the mechanism "shall allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled."

I'm inclined to say that probably means that we have to provide a direct response. Literally, it might sound like all we need to do is allow ourselves the option of providing a direct response, even if we never take advantage of it. But in context, that seems quite unlikely, among other things, because the law also says the policy must include how the social media network will respond and address the reports of incidents of hateful conduct. So, it's true, it doesn't, on its face, actually prohibit the hateful conduct, but it is a viewpoint-based law, let's say. It compels us to have a policy which we do not want to have, and it apparently compels us to send responses to people's complaints. I actually do try to respond when somebody complains, but I don't want a legal obligation to do it, and at some point, if somebody is repetitive or annoying enough, I just say, "Enough," and I won't respond anymore. The law would apparently require me to speak in that way and again, in a viewpoint-based manner, and I think that makes it unconstitutional, in violation of the First Amendment.

Evelyn Douek:

Okay, great. That's really helpful. I think we'll dig into all of the parts that you mentioned there. I just should note for listeners, the reason why the law's definition of social media networks is important and so broad is that it's not just targeting blogs. One of the other parties on your complaint is Rumble, which is the big, what people are calling, alt-tech version of YouTube; it's a video-hosting social media platform. So, you are on here to talk about it because you're a law professor and you've written this complaint, but it is not just aimed at blogs and things like that; the point is this is about social media platforms. In fact, it probably primarily was thought to apply to those kinds of social media platforms, but the point that you are making is it's so broad that it could apply to basically anyone who has a website.

Eugene Volokh:

That is exactly correct, yes. I'm sure the legislators had in mind things like Facebook, Twitter, or Rumble and various other places, they probably wanted to include things like Reddit and such, but they defined it, in part because it's hard to see how else one would define it, unless they had a readership threshold or something like that, they defined it to include any place which is set up for people to exchange speech with each other. That includes us, but it's true, I'm sure we're not their main aim.

Evelyn Douek:

They weren't coming after Professor Eugene Volokh at UCLA Law School. So, let's go back to this hateful conduct versus hateful speech thing that you mentioned. So, as you said, it requires social media networks to have a so-called hateful conduct policy and it defines that as the use of a social media network to vilify, humiliate, or incite violence on the basis of a whole bunch of characteristics. That's no mistake, using the word conduct versus speech there, and so, Genevieve, I'm wondering if you could unpack for the listeners a little bit why the law might have used this term as opposed to speech? Because ordinarily when we talk about this, we talk about hateful speech, many of the comments made by the Attorney General were about hate speech online, so what is the law trying to do there and is it at all successful in doing that?

Genevieve Lakier:

I think there are two reasons why it might have used the language of hateful conduct. One is a doctrinal one, it might have been thinking about First Amendment challenges, and then I think there is this broader public relations angle. So, the doctrinal one is that although Eugene is correct that the law targets speech, speech, in the First Amendment cases, is sometimes treated in a way that counts as conduct. So, for example, if you issue a threat against someone, that's not considered protected speech, that's a form of intimidation, or if you harass someone to the point that they no longer exercise their voting rights, for example, that might be considered to be a violation of their civil rights and that might be considered to be something akin to conduct, rather than protected speech. So, there might have been, I think, a desire to point a court, if there was a First Amendment challenge, to the line of cases, including some that have been decided by the Supreme Court, that have said that even when someone acts...

There's a famous case where it involved a sentencing enhancement for a bias crime, a crime that was motivated by racist reasons, and the court said, "Even though perhaps the reason for this crime is to communicate a message about racial superiority, because what is targeted is conduct, something doing harm to another person, the First Amendment doesn't apply." I think, similarly, New York is trying to suggest that we're not going after viewpoints, despite what Eugene is saying, we're not trying to shut up any particular person, we're just trying to protect users on social media platforms from conduct that's going to intimidate them out of participating or lower their social status in some meaningful way, and so we're no longer talking about speech, we're talking about something else, something that can be regulated. Now, I don't think that's very successful doctrinally. I agree with Eugene that as defined, it's a very broad and loose definition, and so it goes far beyond what I think the courts would consider to be unprotected speech or conduct that is just using speech to carry out its aims.

I guess I wanted to respond to Eugene's argument, though, about how this might be viewpoint-based, which is it's so loosely defined. The statute doesn't provide any kind of intent standard, it doesn't define what it means by... I think the language is, "Vilify and humiliate on the basis of these various forms of identity, religion, ethnicity, national origin, disability." So, it's true, one might interpret that to be a viewpoint-based standard, but of course, what the law does is it requires the platform, or I guess the blog host, to decide, I think, for itself what that means, how it's going to act in response.

So, you might say that because all of the decision making, including the decision making about what it means to engage in hateful conduct, is left in the hands of the blog host or the social media platform, this isn't the government discriminating based on viewpoint, this is just the government saying, "Hey, we want you to think about this and to define this arena of bad stuff as you like, and then to provide information to users about how you're defining it and what you're doing about it, and to negotiate this relationship in a kind of publicly accountable, transparent manner."

So, you might think, even if this has what we might think of as viewpoint differential effects, even if we might think that there's a predictable or a likely effect that it's going to have some impact on certain kinds of speech and not others, the First Amendment cases are very clear that just because a law has some kind of differential effect on different viewpoints, that doesn't mean it's viewpoint discriminatory. Viewpoint discrimination occurs when the government itself discriminates against particular viewpoints. I think you could say that this law, it's not doing that and it's not trying to do that.

Evelyn Douek:

So, Eugene, another way to put that might be why don't you just have a policy that says, "I see New York's definition of hateful conduct and New York requires me to have a policy about this and my policy is that I am not going to moderate this conduct or this content and I'm going to leave it up"? Why isn't that a sufficient response to saying, "This doesn't compel you to do anything or to act in a viewpoint discriminatory way"?

Eugene Volokh:

Well, it does compel me to have a policy. It does compel me to speak, to put up a policy on my website. It sounds like it compels me to respond to certain kinds of complaints. Not all complaints, by the way, it doesn't compel me to respond to complaints that some comments are unpatriotic or anti-police or whatever else, but only compels me to respond to complaints having to do with these particular viewpoints, so that is a speech compulsion. It's true, it is not as broad a speech restriction as you could imagine or a speech compulsion as you could imagine. It's not as broad as saying, "I have to delete certain comments," but it is indeed a speech compulsion, which is presumptively unconstitutional. I have to disagree with Genevieve on this, I think it is not just viewpoint-based in effect, it is viewpoint-based in purpose and viewpoint-based by its terms.

Here's one way of thinking about it. Imagine that tomorrow, the state of, let's look at a fairly conservative state, let's say the state of Nebraska enacts a law that says, "Hateful conduct prohibited on social media networks," and it defines hateful conduct as the use of a social media network to vilify, humiliate, or incite violence against police officers or members of the United States Armed Forces or Americans, generally. So, there has to be a policy dealing with things that vilify Americans or that vilify police officers or vilify soldiers, sailors, airmen, and the like. I take it we'd agree that that doesn't just happen to have a viewpoint-based effect, it is deliberately aimed at certain kinds of viewpoints, it is aimed at anti-police viewpoints, anti-American viewpoints, anti-military viewpoints.

Now, to be sure, to keep working with the "aimed" metaphor, it's probably a BB gun aimed at them, it is not a cannon aimed at them. By hypothesis, this hypothetical law, following the New York law, wouldn't require that these things be deleted, but it would require a policy to deal with that, it would require responses to complaints about that, complaints, that is, about those particular viewpoints. Now, if we had a law that said, "Every platform has to have a content policy," or whatever we might want it to have, "And must then provide a mechanism for complaints and for responses," that would still be a speech compulsion and it may in fact still be unconstitutional, but it would be a tougher challenge to bring, because that really would be a facially viewpoint neutral law. It would apply to complaints about unpatriotic speech, complaints about racist speech, complaints about blasphemous speech, and whatever else. But that's not this law, this law targets complaints about very particular kinds of views.

Evelyn Douek:

Genevieve, I can see you are desperate to weigh in, so go for it.

Genevieve Lakier:

Well, I'll start with what I agree with Eugene about, which is... I guess for our listeners, it might be helpful to explain that there are two First Amendment doctrines that are put on the table by this law. What's great about this law, for First Amendment nerds like me, is that it raises a lot of issues all at the same time. So, we might think that the problem with the viewpoint discriminatory law is different than the problem with the law that is viewpoint neutral, but compels speech, as Eugene suggested. I think particularly when thinking about the large social media platforms, where there is a lot of discussion about whether these are places of public accommodation, are these common carriers, are these important sites of public expression, we might think that, although the First Amendment ordinarily doesn't allow the government to compel speech of ordinary individuals, there's plenty of cases in which, when you are providing an important public service, even if that service involves communication, we're going to sometimes allow the government to compel your speech.

What the court has been clear about is you may not do that in a viewpoint discriminatory manner. When we're talking about the general arena of social media regulation, we might think that compelled speech is on the table, although I know to say that is quite a controversial position, but it is my position. I think, and I think Eugene has the same view, that there are maybe instances and contexts in which, despite the ordinary presumption against compelled speech, we think that it's important enough, this is an arena of public expression, to compel some speech. I don't think there's anyone, really, in the First Amendment world who's saying, "But the government can do that in a viewpoint discriminatory manner," and so I think when thinking about the New York law, it really does matter whether it does compel speech.

Eugene is 100% correct, how can we not think he's correct, that it is compelling him to speak, but is it compelling him to speak in a viewpoint discriminatory manner? That ups the stakes, because viewpoint discrimination, the court has said, is the most egregious kind of regulation. Again, I guess I'm resisting the idea that this is viewpoint discriminatory, or necessarily viewpoint discriminatory. It might be in intent, perhaps, and if something has viewpoint discriminatory purposes, that's problematic. But the state can say, "No, no, no, we're just trying to..." Again, there's an alternative explanation, which is, "This is a really live area, people are really worried about this kind of speech, and so we just want to make platforms and bloggers more publicly accountable for what they're doing." There are problems, the title is a problem, but even though we say we're prohibiting it, all we're doing is encouraging transparency and accountability, and that's why we're not, in fact, imposing any direct obligations.

So, let's just say that that was their purpose, rather than a viewpoint discriminatory purpose, what the state is requiring is not for Eugene to take any particular perspective or viewpoint on the question of hateful conduct or police brutality. Eugene, in response to your hypo, on the reported incidents of police brutality example, I was thinking, well, that provides an opportunity for someone who runs a site where they think that the police are really problematic agents of violence, not recipients of violence. They could have a policy that says, "We do not think that the problem with American discourse right now is underreporting of police brutality, rather the reverse, and therefore, this is our policy, we do not worry about reported incidents of police brutality," or whatever it may be. It provides a forum for you to take whatever viewpoint you want on the topic on the table and that has historically been understood in the cases as a subject matter distinction, not a viewpoint distinction. There's no mandating of any particular viewpoint.

The New York law makes me think about this case from several decades ago called Rowan v. US Post Office, in which there's a law that says, "Homeowners who don't want to get these pornographic or salacious mailers in their mail," they have kids, I don't know, for whatever reason, they just don't want this junk in their mailbox, "Can write to the post office and say, 'Hey, we think that this mailing is inappropriate, it's salacious as we understand it, please put me on the do not receive list.'" It was an early do-not-call list, a no-spam alert. The court says, "This isn't viewpoint-based, this isn't really content-based, because it's up to the homeowner to decide what they consider to be the bad stuff." I think an analogy could be drawn to the New York law, that it doesn't define the terms and maybe that's a problem, maybe that means it's too vague, but maybe it means that it's all in the hands of the individuals to decide how they're going to define hateful conduct, what they're going to do about it, and therefore, it just doesn't mandate any particular viewpoint.

Eugene Volokh:

Well, but the law does actually define hateful conduct, it says, "Use of a social media network to vilify, humiliate, or incite violence against groups" on certain bases. So, as I understand it, even if I have a policy that says, "I am not at all interested in this distinction that the New York legislature is drawing, I don't approve of that, maybe I'll delete some things for other reasons, but not for these reasons," let's say I have this policy, again, I will have been compelled to speak and I've been compelled because the government is trying to encourage people to suppress certain views.

But on top of that, on top of having to have this policy, if somebody sends me an email saying, "I demand that you take down this post because it's anti-Scientology, it vilifies and humiliates the class of Scientologists, as defined by religion," it sounds like I would have a legal obligation to respond to that, whereas if somebody sends an email saying, "There's this post that praises Scientology and that's just really awful, you have to take it down," then I wouldn't have an obligation to respond to that. So, even if you set aside the fact that the statute, just by its own terms, is clearly aimed at trying to restrict certain kinds of viewpoints, it's there in the title, it's there in the body, that it's trying to have a policy just for that, even if you set aside the evidence that that's what the government is going for and just focus on what rules it imposes, it requires me to respond to complaints about certain viewpoints, but doesn't require me to respond to complaints about other viewpoints, it seems to me.

Evelyn Douek:

So, can I pick up on that last point, just because I think it raises this broader question of this moment we're in, where we have politicians coming out and saying all of these things about social media platforms, because it's part of the politics now. So, this is a platform issue, excuse the pun, where it's part of politicians' platforms to rail against social media platforms in some way. We've had this in Florida and Texas, where it's been really against leftist, big tech censors, and now we have it in New York, where we have Democratic politicians railing against these negligent platforms that let hate speech run wild. So, I think there's this broader question of how to think about those kinds of statements and that kind of intent when they then pass these laws.

Now, in this particular case, the "hateful conduct prohibited" language did actually make it into the law at some point, but otherwise, there is this question about whether it doesn't actually compel certain speech to be taken down. The same thing in Texas and Florida, where there's all of these statements about being anti-censorship, but on the face of the law, the concern about censorship of conservatives isn't written into the law as such and in some ways, the law looks facially neutral, if you were to read it absent knowing all of that background. So, I'm curious, Eugene, how we should think about those extra-legislative statements when we're interpreting these laws? If you could unpack for us how much that matters to you and to the First Amendment analysis when we're thinking about this, if we didn't have all of that background in the New York politics, would that have changed your view of this law?

Eugene Volokh:

That is a great question. One way of thinking about it is let's look at a case from about 35 years ago now, Frisby v. Schultz. Frisby v. Schultz involved a ban on residential picketing and the court upheld that ban, stressing that it was a content neutral ban. In fact, some 10 years before, there was a ban on residential picketing that excluded labor picketing. The court struck down that content-based ban and only upheld this ban on all residential picketing, that is, say, picketing outside of someone's home, only because it was content neutral, so that's a fairly well-established precedent. But let's look at what was going on, as best I can tell, in that particular little town. What happened is there was picketing outside of the home of an abortion provider and the law was then enacted to stop that picketing.

Human beings being what they are, it's probably just human nature that a legislator's more likely to see the problems with residential picketing when the person being picketed is someone they sympathize with and more likely just to vote against the residential picketing ban when the picketers happen to be ones whom the legislator sympathizes with, in that particular example. Nonetheless, they enacted a content neutral law, triggered by this particular incident, but going forward, applying to all sorts of residential picketing, labor residential picketing or residential picketing criticizing some government official or whatever else. So, the general rule, under the so-called O'Brien rule from an earlier case called US v. O'Brien, is courts are pretty unlikely to look at the floor statements of a few legislators who happen to vote on the bill, partly because there often are a lot of legislators who vote on the bill and the fact that a few of them have a particular view, that's not enough.

Interestingly, there's a case called Cornelius from the mid-1980s, which involved an executive policy, it was a federal government policy having to do with certain kinds of charitable fundraising in federal workplaces. There, the court suggested, "No, no, we should look at whether the people enacting it were motivated by the viewpoint of the speech." So, maybe there's a difference between executive agency things, which are often implemented by just one person, whoever's the agency head, versus legislative action, which in many situations, is many hundreds of people. So, I think that's a really interesting and difficult question. I do think that if it's written into the text of the law, either in the findings or here in the operative text, again, the law distinguishes certain kinds of speech, labeling it conduct, but certain kinds of speech from other kinds of speech based on the viewpoint being expressed, that's an easy case for it being viewpoint-based.

But you are quite right, there are some interesting situations where we think we know what's motivating the legislators, but on the other hand, maybe the legislators... Maybe in the Frisby v. Schultz case, what happened was nobody really much thought about residential picketing and this incident just brought home to them how bad residential picketing is and they didn't really care whether it was pro- or anti-abortion or anything else, it just happened to be the thing that motivated the enactment of the law. The court is usually willing to give legislators the benefit of the doubt in that kind of situation where the law is facially neutral, but again, this law is far from facially neutral.

Evelyn Douek:

So, if it was facially neutral, if we didn't have all of these statements, if we instead had legislators saying, "We are tired of not knowing what these rules are on our platforms. I want to post on The Volokh Conspiracy and know what Eugene's going to do, I don't want to wait to see if I'm at his mercy," and we don't carve out hateful conduct, but we say, "And we want to know what his rules are, generally," would you think that that raises a First Amendment issue?

Eugene Volokh:

So, it's a great question. I do think it raises a First Amendment question, but it may be justifiable. I definitely agree with Genevieve that while the court often says, "Oh, speech compulsions are almost never constitutional, they're treated the same way as speech restrictions," that can't be right in all situations. There are certain kinds of disclosures, especially when there's something commercial going on, which are allowed. Now, to be sure, most of these platforms don't sell access to consumers, but you can imagine some kind of thing that's framed as a consumer protection law: before you commit your time and effort to this, you're entitled to know what's allowed and what's not allowed. What if you actually spend a lot of time crafting something and then it's just deleted for arbitrary reasons, is that fair to you?

I'm not sure that's enough to justify a compulsion, but at least you can imagine that. And again, that would mean that if a platform wants to have a policy against any kind of viewpoint or in favor of any kind of viewpoint, they would have to disclose it. So, I do think that would be a much different kind of case than one where, on its face, this policy requirement is just based on viewpoint, so it's not just a matter of wanting to protect commenters of all views from having those views blocked, to their surprise.

Genevieve Lakier:

I think the more difficult question, again, I'll agree with Eugene that that kind of policy would implicate the First Amendment, but I think, at least as long as it's reasonably tailored and thought through and not overly burdensome, it seems potentially totally constitutional, and the question should be about design rather than about whether you can do it at all. I think the New York law raises a middle ground question, which is interesting to me. Now, this is of course only if we think that it's not viewpoint-based in the way that Eugene thinks it is. I actually think the fact that the title of the law says hateful conduct prohibited and then nothing in the content of the law prohibits anything suggests that the New York state lawyers started talking to the drafters of the law and were like, "There are First Amendment problems here, so let's pull back." So, I don't quite know how to read that, whether that shows viewpoint intent or whether that shows really a realistic appreciation of what the First Amendment allows and doesn't allow and then an effort to still do something, given those constraints.

But leaving aside that, because certainly if I was a lawyer, I'd be very angry that they didn't also follow my advice and change the title, for God's sake. But let's say it's not about viewpoint discrimination, it's just about this is a really important area for our state, we know that people in the state care about this, and we want them to have good information about it. So, it's something akin to, I agree it's a little weird, but a subject matter law rather than a viewpoint law. Is that kind of a disclosure obligation okay? Can you say to platforms, not just, "You have to publish your policies, whatever they are, and provide some kind of procedural mechanism for consumers to flag a violation of that policy," can you say, "You also really have to publish your policies on hate speech, misinformation, sexual exploitation, threats and harassment," to make sure that the platforms are thinking about the things that you think it's important for them to be thinking about? Now, when I'm thinking about platform regulation, I immediately go to other arenas of communication and think about what we've done in the past.

It is true that when we're thinking about how we regulate broadcast media, now for doctrinal purposes, broadcast media is in a special category, but I think when we're just thinking about policy, that's just legal hand waving to try and make sense of the current rules. But let's just think from a policy perspective, with broadcast media, there are requirements that they maintain particular transparency obligations with respect to certain kinds of political speech. There are subject matter requirements written into the federal and state broadcast laws, because there is this view that this is the kind of speech that it is particularly important for the public to know what the broadcasters are doing, this is different than other kinds of speech. So, to me, it's just not obvious that a law that imposed disclosure obligations on the platforms and also mandated that they be about certain topics, but didn't mandate what the platforms do about those topics, would be unconstitutional, and maybe one could think about the New York law as something akin to that.

Eugene Volokh:

But Genevieve, the law specifically targets particular viewpoints, not just all speech on the subject of race, religion, or sex. You could imagine a law that says, "You have to have a policy on complaints about speech related to race, religion, or sex." So, if you get a complaint saying, "You guys are just being too kind to Black people," then you have to respond to that and you have to have a policy about that. That, you might say, is a subject matter category, although... Interesting question, what if there's, for example, during wartime, a rule that says, "No demonstrations about the war," and it's quite clear that they're targeting anti-war demonstrations, but they write it broadly, an interesting question. But here, on its face, this has to do with speech that vilifies, humiliates, or incites violence based on race, color, religion, et cetera. So, that means that if it's speech that's sharply critical of particular religious groups or those kinds of viewpoints, there needs to be a policy about it, but speech that praises religious groups or those kinds of viewpoints, there wouldn't need to be a policy about that.

Genevieve Lakier:

That's right. It's focusing on-

Eugene Volokh:

That's not a subject matter category, that's a viewpoint category.

Genevieve Lakier:

No, it's just focusing the conversation on certain kinds of topics. What is your policy about this kind of speech, not that kind of speech?

Eugene Volokh:

But what's your policy about this set of viewpoints? Again, imagine a policy's-

Genevieve Lakier:

That's true, I think, with any policy.

Eugene Volokh:

I'm sorry.

Evelyn Douek:

I see we have different viewpoints on this question, so I'm going to jump in. I do want to note, because I think this is important, just in case any of the listeners had a heart attack: the idea that you could definitely compel platforms to publish their community standards, there seemed to be rough consensus on that, maybe, in the views that we've heard, but that's not necessarily the majority view in the scholarly community. In fact, I would say it's potentially a minority view. I think a lot of people would say, "Look, we can't compel The New York Times to publish its editorial standards on opinion pages, and so therefore we can't compel all social media platforms to publish all of their community standards. If they want to do it transparently, voluntarily, that's fantastic, we would love that," but this is something that's coming up in the Florida and Texas litigation. Do you think that's a fair characterization of the dominant argument here, or did either of you want to respond to that argument?

Genevieve Lakier:

Another way to think about what New York is doing, and I agree with Eugene, doing poorly, but maybe trying to do, is treating, clearly, platforms or social media sites as something akin to public utilities or something like that. That is to say, Evelyn, you're right, this is not something we ordinarily would think you could impose on newspapers, so we're putting this class of communicative businesses in a different category. When it comes to public utilities, we require all kinds of disclosures about their consumer policies and individualized response and we impose all these other kinds of regulations, and historically, we haven't really done that with communications media, and so, it is a live question whether this is permissible or not permissible. I suppose I'm sympathetic to the view that in some contexts, it is permissible because, of course, what the social media platforms are doing, the service they're providing, is a forum for speech and therefore, the primary service they're providing is this opportunity for users to speak.

So, from a consumer welfare perspective, it seems to me the need to have consumers understand the rules that govern their speech, it's just a qualitatively different thing than when it comes to a newspaper, where the primary service is providing a set of opinions. Maybe it's interesting to know what the editorial standards are when it comes to the opinions. In fact, I would love to know more about what The New York Times decision making process is like, I think it'd be very interesting. I miss that; there was The Public Editor and The Ombudsman who did a lot of this disclosing and revealing stuff. It's not like The New York Times or other newspapers are always black boxes, they often think it is important for the public to understand these things. But the service they're providing seems, to me, quite different, and so with respect to this class of entities, including The Volokh Conspiracy, in which one of the services that...

I guess the difficulty about a blog, and I'm curious what Eugene thinks about this, is that it's in between, perhaps, something like a newspaper and something like a genuine social media platform, because the service it's providing is opinion, is a curated product, but then there is also this vigorous comments page, it sounds like it's quite vigorous on The Volokh Conspiracy, and so it's a kind of hybrid beast. But at least when it comes to the true platforms, that looks sufficiently different from newspapers that I think we should think about the transparency obligations differently.

Eugene Volokh:

So, there's always a lot to what Genevieve is saying and I think a good deal of it I would probably agree with. I tend to draw distinctions between what I call platforms' hosting function and platforms' conversation management function. So, for example, I'm inclined to say that a rule requiring platforms to host people regardless of viewpoint, like Twitter would have to allow a Twitter account, even if it doesn't like its viewpoint, Facebook would have to do the same, YouTube or Google running YouTube would have to do the same, might well be constitutional, though I wouldn't apply it to newspapers. I do think in that respect, platforms are more like phone companies, to borrow the utility example from Genevieve. So, it's an interesting and controversial question, I'm not completely sure what I just said is correct, but I think that's at least a plausible rule.

However, when platforms create situations where people can speak to each other, which is to say comment sections on a Facebook page or comment sections on my blog posts, one of the really important, and I think constitutionally protected, services that they provide, at least I'm inclined to say that, is the management of this conversation to make it nicer for people. It doesn't really bother me that @realDonaldTrump may be back on Twitter, because I'm not going to see his tweets, unless I follow somebody who follows him, in which case, it's my lookout that I'm following someone who's retweeting him or something like that. On the other hand, if I'm in a comments section and I am reading people's comments, it's probably a good idea to let me, as an individual reader, block things, but still, if there's, for example, a lot of spam, that may make it much harder to understand what's going on. Likewise, if there's a lot of personal insults and such, that poisons the conversation; I do sometimes moderate my comments, usually for personal insults.

So, I'm inclined to say that platforms, when they are managing conversations for the benefit of their users, should have considerable flexibility about what policies they have. So, then the question is, "Fine, exercise that flexibility as you want, just announce those rules." The trouble is those rules are often pretty mushy. What really is sufficiently insulting to be deleted and what isn't? The complaints I usually get aren't, "There's this horrible comment out there, delete it," the complaints are usually, "You deleted my comment." I say, "Well, just have a look at it. Isn't it personally insulting and substance-free? Aren't you embarrassed to post it?" "Well, those other people posted much worse than I did." Well, okay, I didn't read all of those comments or I thought they may have been different or whatever else, I get to decide, because somebody's got to. So, one problem with not just mandating the publication of a policy, but then insisting that it be enforced evenhandedly, is it makes that kind of valuable moderation a lot harder.

In any case, these are all interesting questions. None of these distinctions are things that we are relying on in our case, because we do believe this is a viewpoint-based rule that is justified, quite overtly, by the government's desire to try to pressure people into restricting certain kinds of viewpoints. If we had a different rule that was viewpoint neutral, then that would be something we'd think about and I'm not sure I would have wanted to bring a suit challenging that. Maybe I would've just wanted to kibitz as an academic, writing this or that, because I wasn't completely sure what the right answer was. But this is not that, this is not a consumer protection measure, this is not a measure just making sure that people understand what they're getting into, because if it were, then the question is why is it targeting these particular viewpoints and not all these other viewpoints or other kinds of content that people might also be upset about, but that get no protection?

Evelyn Douek:

So, I think your point about the rules being mushy raises an interesting issue that often comes up in regulation of social media and content moderation more generally, which is everything's a little bit mushy and it can be hard to write into legislation exactly what you want or what you mean. So, one of the other responses that you have when you say, "Well, just get social media platforms to post their rule books," is like, "Well, how much specificity do they have to give? Can they just post general rules or do they need to say every single subset?" And then to write that into legislation and say, "Well, here's what we mean by a reasonable rule book," can be really, really hard.

Now, this comes up in your complaint in another way, which is you have a few pages of your complaint where you just go through and say, "This term is not defined, this term is not defined, this term is not defined, user is not defined, this is not defined," which I think is pointing both to legislative laziness, in this case, and also to the difficulty, sometimes, in such a fast moving space or when it comes to social media, of saying exactly what it is that you would want that would satisfy the requirements. So, can you talk a little bit about the vagueness issues, why vagueness is a problem, and how to think about that?

Eugene Volokh:

Sure. Let's just look at the clearest examples. So, the policy I have to have is a policy that explains how I deal with things that humiliate a group or a class of persons on the basis of race, color, religion, sexual orientation, sex, gender identity, or gender expression. So, if it had said "use fighting words against," that wouldn't really apply to online speech generally, but imagine that, fine, there's a body of law that explains what this term means. But to my knowledge, there is no body of law that explains the meaning of the word humiliate, so what's the line? Let's say somebody posts something that really is an attempt, and maybe a successful attempt, to just show how foolish a certain set of religious beliefs is. Is that humiliating to people who hold that or is it just a plausible argument about why a certain kind of ideology is not a sound ideology?

So, even if we agree that it's offensive and still, even there, of course, a lot of offensive speech is constitutionally protected, but even let's say we agree it's offensive, this presupposes some line between merely offensive and outright humiliating. Likewise with vilify, presumably, vilify doesn't just mean say something negative about, presumably it means something super negative, but how negative? Well, I don't know. Now, you could imagine somebody having such a policy, I could have such a policy for my users and I say, "It's my decision, my property, my website." But if the government imposes legal obligations on me to have policies having to do with vilification and with humiliation, then I would think you'd want to have a definition, and I just don't think there is any such definition in the law.

Again, remember, this law is not purely an exhortation, oh, why isn't everybody nicer to each other? It actually imposes binding obligations on me. I have to have a policy about these particular things and not necessarily about merely material that's offensive or negative. I think I have to respond to complaints about vilification and humiliation, but if it's a complaint about merely something that's offensive, I don't think I have to respond to it. That's a binding law, so it potentially subjects me to heavy fines if I don't comply, without really explaining to me what those terms mean, and again, those terms have no preexisting body of law that helps with that explanation.

Genevieve Lakier:

Just responding to that, I suppose I have two thoughts on that. So, one is it's clear that there's a lot that's not explained about this law and ordinarily, from a First Amendment perspective, we think that that's a problem. If a law leaves users in the dark about what exactly it means, we worry about chilling effects, and I think that is a completely legitimate concern here.

Evelyn Douek:

Just to unpack that just a little bit, because if people don't know what they are and are not allowed to do, they will err on the side of caution and not do something, which means that it's not just actual penalties for speech that are the problem, it's this chilling effect, just to unpack that for people a little bit more. Here, you might imagine platforms erring on the side of caution and taking down lots of extra stuff because they don't want to get in trouble with New York's law, even if they wouldn't have by leaving the content up or otherwise, so that's the constitutional concern.

Genevieve Lakier:

Yes, exactly. I think it's especially salient with a law like this, where it says, "We're going to impose fines," so there's going to be monetary penalties. For a small enterprise, those might feel significant. I think the law says every day that you continue to violate the law, as we understand it, is an additional violation, so if you do it for a week, it's seven times the penalties; 14 days, it's 14 times. So yes, I worry about the chilling effects. On the other hand, we don't know anything yet. The law has just been challenged, and we don't know how New York is going to enforce it.

It might be the case that the reason they're not defining the terms is because, as I suggested earlier, they want to leave it up to the host to define the terms, and so what it means to respond to a complaint about this kind of speech is to respond to a complaint about that speech as you define it. Now, of course, if you push that too far, you could imagine gameplaying, where you define it to mean a null set, and so you never have to respond to any speech, and so it seems unlikely that the New York Attorney General is going to think that that's good faith compliance with the law. But it might be the case that for the most part, New York is just going to defer to how the platform wants to define these terms and not impose its own interpretation on them.

So, as long as you publish your policy and you respond to complaints that fall within that policy, you've satisfied the law. In my mind, that's still, from a constitutional perspective, a complicated, difficult lift for New York to justify, because we don't usually allow compelled speech and we don't usually allow the government to impose these kinds of subject matter, maybe, arguably subject matter requirements on platforms. I'm not saying it's out of the constitutional woods, but it's not clear right now that New York is going to impose its own definition of these terms on the platforms. It might, in fact, be deferring to how Eugene or other hosts are defining them.

Evelyn Douek:

I'm curious, Eugene, you mention in your complaint that this law's going to make you reply to every complaint that you get on The Volokh Conspiracy. Technically, this law came into effect, I think, on the 3rd of December and I'm curious, have you started complying with it and how burdensome are you finding it, or are you feeling the chilling effects? What does it feel like to be subject to New York's online conduct, hate speech prohibited, whatever it is, law?

Eugene Volokh:

It's good to be a law professor, it's good to have good pro bono counsel in the shape of the Foundation for Individual Rights and Expression, and it's good to have a pretty good sense that the law is indeed going to be invalidated. So, I'm not worried too much about the fines, because I have confidence in our legal position, but not everybody would, maybe some other bloggers might have a different view. I'm willing to not be chilled, but one concern with laws that have a chilling effect is some people won't be chilled, but others might be. Again, one way of thinking about it is to go back to the hypothetical that I mentioned: imagine that the law did indeed say, "Vilify, humiliate, incite violence against police officers, members of the military, or Americans," what would our reaction be?

I think the answer would be this isn't a subject matter restriction. It doesn't cover all speech about police officers, members of the military, or Americans, it just covers particular kinds of highly negative speech about them, and that sounds pretty viewpoint-based. Maybe it might be enforced in a very deferential way, that is, say, a way that defers to prosecutorial judgment by the state legislature or by the state government, maybe the very same state whose legislature enacted this law; the executive will say, "Well, but we realize we can't really do much about it. We can't really make it mean much. It's on the books, but we're not going to do much to try to enforce it and collect those fines." Maybe, but I think we'd be inclined to say, "The law is sufficiently clearly unconstitutional on its face, because it targets particular kinds of viewpoints," and that's enough to invalidate it.

And then if there were a law that were genuinely content neutral, or maybe subject matter-based, like, for example, a law that, by analogy to the Rowan v. US Post Office case that Genevieve cited, very important case, only applied to complaints about sexually-themed material, that might be viewpoint neutral, but content-based, subject matter-based, maybe it might be different. But for a law like this, whether my hypothetical about the law having to do with vilification and hatred towards police, military, or America, or this particular law, it seems to me it should be pretty clear that it's unconstitutional. Again, that's why I feel pretty confident, I'm willing to run the risk, because it's easy to be brave when you think the other side are shooting blanks. Maybe if I thought the law was more likely to be constitutional, I'd be much more concerned.

Evelyn Douek:

One of the things that I am really grateful to you for in bringing this challenge, one of the things that's really useful here in this case, is it forces us to think about intellectual consistency. Because we're putting this back in a broader picture about what's going on in politics and the nation right now around social media regulation, and we've talked a couple of times about Texas and Florida's laws regulating social media platforms, which are coming from the opposite side of the political spectrum. I think that it is helpful for people to work through, as they're thinking about this, well, if I'm concerned about those laws and not happy about those laws, how do I feel about the New York State law?

I think that your answer to that question should not depend on the politics of the party that has passed the law or the politics of the lawmaker, that there should be some way of reconciling those positions, and if you don't like the viewpoint that one legislature is trying to enforce through law, then you should have serious questions about the same in a different state. So, what should we be watching here, Eugene, what are the next steps? When will we hear more about the case in New York and find out whether you, the hero, have triumphed in this story?

Eugene Volokh:

Well, at least the initial phase is surprisingly soon. In fact, the judicial process seems to be operating blazingly fast, by the standards of the judicial process, here. We didn't ask for a temporary restraining order, really, a truly emergency order. We asked for a preliminary injunction, which means you don't have to wait for years for a trial, and we got, actually, a pretty prompt briefing schedule. So, we filed our motion for a preliminary injunction on the 6th, the court ordered the defendant, New York, to file a response within a week of that, so by the 13th, so by next week, then on the 15th, so a week from today, two days after the response, we have to submit the reply and then there will be the motion hearing on Monday, December 19th. So, we are talking about less than three weeks from the filing of our complaint, after which we pretty promptly filed the motion, and less than two weeks from the filing of our motion, for the court to hear the case.

Now, of course, the court might hear the case and say, "Well, this is really difficult, I'm going to take it under advisement," and then it could be however long, but it sounds like the court is committed to dealing with this issue quickly, not necessarily because it thinks it's an easy case, but because it thinks it's an important case and it's important to protect free speech rights of everybody, or at least to consider the free speech arguments that may affect everybody in New York or platforms that are subject to New York jurisdiction. So, we're hoping to get a response on this preliminary injunction quite quickly. Now, the decision to grant or deny would then be appealable, so presumably, there's going to be an appeal one way or the other, whoever loses likely will, and that'll take at least several months. But again, by the standards of our judicial system, that is pretty quick.

Evelyn Douek:

That is amazingly fast, so excellent. This podcast might still be going by the time we get a ruling, in which case, we will have to do a follow-up. Thank you both very much for joining us, this was a really good and helpful discussion. As we say around here, everything is content moderation, so who knows what the next episode of Moderated Content will be about. In the meantime, this podcast is available in all the usual places, including Apple Podcasts and Spotify, and transcripts are available at law.stanford.edu/moderatedcontent. This episode was produced by the brilliant Brian Pelletier, special thanks to Alyssa Ashdown, Justin Fu, and Rob Hoffman. See you next week.