Moderated Content

Zoom Rethinks its Approach to Content Moderation

Episode Summary

A little over a year ago, Evelyn interviewed Josh Parecki, Zoom's Head of Trust & Safety and Associate General Counsel, and Josh Kallmer, Zoom's Head of Global Public Policy and Government Relations, about how Zoom thought about content moderation. And since then, they've been doing some rethinking. So Evelyn asked them back to talk about what's changed in the way they think about trust and safety, the change in regulatory landscape even in the last year, and the difficult problems that pop up for every communications platform.

Episode Transcription

Josh Parecki:

You're talking to two avowed nerds for policy, legal, and trust and safety, so we did read your article, Evelyn.

Evelyn Douek:

Oh, boy. All the flattery. This is not going to work, Josh.

Josh Parecki:

We read your article. No.

Evelyn Douek:

That is the way to an academic's heart. I mean, it's like, you and my friend, that's it.

Hello and welcome to Moderated Content, podcast content about content moderation, moderated by me, Evelyn Douek.

Zoom, you may have heard of it. Chances are you have or will use it today, or at least this week. It's become a verb, "Let's Zoom." It's become a sometimes dreaded obligation, "I'll set up a time for us to Zoom." But it's also a communications platform. And as is the universal law of communications platforms, there are trust and safety or content moderation problems. That might surprise you. You might ordinarily think of trust and safety or content moderation as a social media platform problem. But wherever there's content, there's a need for some kind of content moderation.

Today, I'm going to talk to two people from Zoom about what is different or similar about trust and safety at Zoom from the context in which we often think and talk about it. Somewhat unideally, when it comes to doing audio, they're both called Josh. So we are going to call them Trust & Safety Josh and Public Policy Josh to try and make it a little less confusing for our listeners. So, Joshes, I'm going to get you to introduce yourselves, tell us your titles and what your jobs involve so that our listeners can start to hear your voices and differentiate you. Maybe let's start with you, Trust & Safety Josh.

Josh Parecki:

Hey, everybody, I'm Josh Parecki. I'm the head of Trust & Safety at Zoom. I'm also a lawyer, an Associate General Counsel.

Evelyn Douek:

Great. And Public Policy Josh?

Josh Kallmer:

Yeah, Josh Kallmer, head of Global Public Policy and Government Relations, and a recovering lawyer.

Evelyn Douek:

Godspeed with that recovery. Thank you both very much for coming on. In some ways this is kind of part two of a conversation that we had about, I was looking at it, I think 12 to 13 months ago, at the end of 2021. It was when I was at Lawfare doing the Arbiters of Truth podcast with my co-host, Quinta Jurecic. The title of that episode was How Zoom Thinks About Content Moderation. I asked you then, and this is sort of quoting from what I said, something along the lines of, "Why? Why are you getting into the business of moderation?" I said to you, "A lot of the things that are in your terms of use, your Community Standards, include a lot of categories of content that you don't have a legal obligation to take down. So you could in many ways just not do a lot of this."

A lot of the conversations, as we were saying, happen at the application layer. Where Facebook or Twitter or Discord, there's this sense that they have a big responsibility for the content that they host, but that Zoom feels maybe in some ways more like a telephone, or at least it could. It could have been a meeting within Zoom at some point where you could say, "That's how we're going to think about ourselves, much more as a telephone utility or postal service. It's not our job to get our hands involved."

I just, I guess, want to start there by saying, how did you think of ... What was your response then, a little over a year ago, and how are you thinking about it now, about 12 to 13 months later?

Josh Parecki:

That's a great question. Actually, it was that podcast that caused us to think about the question even more deeply. But back then the way we were thinking about it was in reaction to what Zoom was in 2020. Which, if you recall, we had this explosive use. I think you rolled out the statistic about, on one day in March, we had two million signups for the application, and 350 million logins in one day. All the ways that people were using Zoom were so unique and implicated potential content issues more so than we would've expected as a platform in 2019, where we were largely a business-to-business platform. As a result of that, we had to start thinking about it a little bit differently. We had to think, "What are our obligations since people are using Zoom so publicly for so many things?" And because we had the phenomenon of meeting disruptions that were coming in, we had to think about, "How do we think about a rule set around addressing these meeting disruptions and the conduct that happens in those meetings?"

But then we had a conversation with you. The pandemic started to change the use of Zoom. As the pandemic sort of died down, people still use Zoom for everything, as you said in your wind-up. Business-to-business, for the most part, or small meetings. So it forced us to think a little bit. In particular, think about how we positioned ourselves. It caused us to actually adjust even the way we publicly represent ourselves in terms of how we enforce ... whether you call it content moderation, because as you always say, everything is content moderation, or just our obligations to make sure that we have a safe platform for people to use. We've actually done a lot of thinking about that. Recently, just, actually, last month, we released a new version of our public-facing safety materials, our new Safety Center, where we tried to bring some clarification around that.

Josh Kallmer:

Could I just add a thought? In addition to what Josh said, alongside our evaluation of how our business was changing and what expectations people brought to it, we were also looking out at the world. We've been in this period of a few years now of governments and societies around the world redefining their relationships with technology. That's continuing and it will always continue. But one of the things that we noticed is that when it came to matters of content moderation, and more broadly, how to treat different kinds of companies at different points in the stack, as we call it, it wasn't straightforward.

Even very sophisticated governments like that of the United States weren't entirely sure how to treat a lot of companies, including Zoom. So I think it was the combination of that internal reflection as well as this outward, ever-shifting and sort of uncertain landscape that got us thinking, "We both want to do something that's right for our business, but we also want to inform public thinking and public debate around these issues in a thoughtful way." That has all driven what we've been up to over the last 13 months or so.

Evelyn Douek:

That's great. Every academic dreams of having some sort of academic impact. You don't normally think of it as coming in the form of a half-cocked question on a podcast, but there you go. I'll try and add that to my tenure file at some point. Depending on how it pans out, of course. You could have just been lobbing the responsibility onto me. So we'll see how this goes.

There's a lot to dig into there. But before we do, I want to make it a little bit more concrete for listeners what we're talking about, so that we're not just talking in the abstract about trust and safety the whole time. So that people can think about the kinds of issues that you're confronting and the kinds of things that you talked about.

Josh, you mentioned Zoom bombing, which was the big first explosion of how people maybe started to think more about terms of use or community standards, and we'll come back to the wording in a minute, as you might think about these things. But that's probably only the start of it. I'm curious, can you give us a little bit more flavor, color around the kinds of things that you see on your platform, things that people might not expect, and maybe how you might have seen different patterns over time?

Josh Parecki:

Yeah, that's a great question. It sort of dovetails ... You're talking to two avowed nerds for policy, legal, and trust and safety, so we did read your article, Evelyn.

Evelyn Douek:

Oh, boy. All the flattery. This is not going to work, Josh.

Josh Parecki:

We read your article. No.

Evelyn Douek:

That is the way to an academic's heart. I mean, it's like, you and my friend, that's it.

Josh Parecki:

But one of the points I wanted to sort of emphasize is transparency. The reason I emphasize that is, in 2020, we built our first transparency reports, both for purposes of ... I promise I'm getting somewhere here ... for purposes of both government requests and for purposes of the decisions we make around enforcement of our safety rules. Now we're calling it the AUP. It used to be called the Community Standards. Those are actually really instructive for folks to understand what we see.

What we saw in 2020 was a range of things. You referred to Zoom bombing; we call it meeting disruptions. Just for the listeners' clarification, one of the reasons we do that is Zoom wasn't the only place where this was happening. A lot of people were using Zoom, so it took that name, but it was happening all over the place. But in any event, we'd see a range of conduct with respect to meeting disruptions. Anything from just silly to very, very harmful, including folks that would disrupt meetings with things like child sexual abuse material.

Number one, we wanted to fairly address that conduct and take appropriate action depending on what that disruption ... what was associated with that disruption. But even more concretely, we started to see, as I said in the wind-up, people were using Zoom to have very public meetings. They had publicized their meetings and invited people to those meetings. We saw that in particular in 2020 around certain meetings that were hosted by universities. Universities or academics would have meetings and invite folks. We'd get a lot of reports related to who they might invite and what the subject matter was associated with those invitations. And we'd have to start making decisions as to what we would or wouldn't do in response to one of those reports.

Because if somebody, for example, said, "You've invited person X, who we find deeply offensive or who might be associated with a terrorist organization," we had to figure out, "What do we do with it and what's the framework by which we make those decisions? Who do we talk to?" All those sorts of things. Because fundamentally, if we took any action, that is content moderation. We needed to have a clear set of rules by which we would make those decisions.

Evelyn Douek:

Just so listeners are aware, these are not hypotheticals. We talked about this a little on the podcast, but I think you're referencing meetings or events at universities with Leila Khaled, who participated in airline hijackings 50 years ago and is associated with an organization the US government has labeled a terrorist organization. Zoom's decision at that time was to shut those meetings down as they'd been reported by outside third-party organizations. That was a controversial decision. It was one of, again, Zoom's most controversial moments. Because this was an academic event, part of the point was to discuss these things. Academics often discuss difficult topics with difficult people as part of the conversation. Did having guidelines help in thinking through those issues? What was your thinking in response to those controversies and how did that pan out?

Josh Parecki:

My response to that is it was a good start. Because what happened after that is we did have a set of rules that we could rely upon in making some of those decisions. But then the question is, and back to the point about transparency, and this is process transparency: not only did we have a set of rules, but how would we think about enforcing those rules? And then in the context of academics, and academic freedom in particular, we went back to the drawing board. This is what we try to do at Zoom, is anytime we make a decision like this, or think about making decisions like this, we try to have conversations about it. Because, and again, this might sound trite or something, but we try to be very humble in how we think about trust and safety at Zoom, or even government policy, as Josh will tell you.

Here what we did was actually invite a whole bunch of academics to a series of round tables. So we could talk about it very openly, and then figure out how we would thread the needle, or what our commitments might be to the academic community when we evaluate these sorts of decisions in the future. Those sorts of discussions actually then created new frameworks by which we make a decision in that context. And then we publicize that, to say, "Here's what we're going to do under this circumstance. Here's how we're going to handle it." We've tried to do that with how we think about trust and safety generally. And we adhere to that, which is another point you make in your article. It's like, you could publish these guidelines, but the second question is whether you adhere to them. So we're proud that we still adhere to those commitments we've made in those contexts.

Evelyn Douek:

Well, I also read your article in preparation for this podcast, Trust & Safety Josh. You wrote it with your colleagues Karen Maxim and Chanel Cornett in the Journal of Online Trust and Safety. Titled, How to Build a Trust and Safety Team in a Year: A Practical Guide from Lessons Learned So Far at Zoom. The title says it all, in terms of, you were building the plane while you were flying it, as we said, given how dramatic the rise was. I thought some of the examples ... You quote at the end, "Does a Civil War reenactment violate our weapons policy? How do your rules apply to pole dancing fitness classes? Is that a nipple?" Which I think are some of the less serious content moderation issues that come up. Are those real examples? Are these the things that you've been wrestling with?

Josh Parecki:

Yeah. Oh, yeah, for sure. We had an internal Zoomy who in fact wanted to post Civil War ... He liked to shoot, so he wanted to post a video of him in a safe circumstance, on a gun range, shooting old Civil War weapons. But if you read our Community Standards at the time, it would say, "No, we can't do that." So the question is, "How should we modify it to account for the fact that this might be a type of conduct that's okay, as opposed to other types of use of weaponry?"

Yeah, pole dancing, nipples, all sorts of things come up in the context of people's use of Zoom, because whatever you can think of, people might use Zoom for. With the caveat, and this is what we get into in the context of our new Safety Center, that whatever people do on Zoom ... It's not like social media because it's not searchable or indexed, and you can't amplify it. So it's different. It's much more in these private settings that people share with one another, communicate with one another.

Josh Kallmer:

I think that's actually highly relevant, Evelyn, to the evolution we're going through. It may be just a semantic difference, but I don't think so. I think what we feel like we're grappling with is maybe less about content and more about conduct, and that's where the idea of usage guidelines comes into play. Because as we sharpened our thinking, we just sort of recognized, "Look, people come into contact with our platform in different ways, but a few things tend to be true."

As Josh referenced, we don't have user-generated content. We don't have directories. You can't follow people. Calls are encrypted. Some are end-to-end encrypted. So the relationship that people have with it is just very different than it would be with social media, which caused us to think, "Therefore, how we think about enhancing safety has got to be responsive to those different uses, and therefore, a set of use guidelines and a very sophisticated ..." I mean, what Josh and his team have done over the last 13 months to our reporting structure and the kinds of tools that we have the ability to use to enhance safety has been prodigious. They're all directly tailored to the kinds of ways that people interact with the platform, which doesn't happen in a lot of other contexts.

Evelyn Douek:

I see your recovery from being a lawyer is not quite complete, Josh. When you're drawing that distinction between content and conduct, that's a wonderful First Amendment analogy there.

Josh Parecki:

I thought you were going to make fun of him for using the word prodigious.

Josh Kallmer:

Prodigious.

Evelyn Douek:

Oh, also good.

Josh Kallmer:

[inaudible 00:15:59], so I guess I have some ...

Evelyn Douek:

It takes a while to get that out of your system. Okay. I want to talk a little bit more, be more specific about, when you're talking about the change of language and whether it's semantic. You've talked about this new Safety Center that you rolled out. It really is all about drawing the distinction between yourselves and social media platforms. I was having a chuckle because you rolled out these Community Standards in, I think it was late 2020, or sometime in 2020, to say, "We are going to take our responsibility seriously to trust and safety. Here are our new Community Standards."

And now, on December 16th, 2022, in your Community Standards update log, you say, "Okay, we're changing it actually to Acceptable Use Guidelines, away from Community Standards, to better reflect and align Zoom's approach to trust and safety." It's funny to me because, of course, community standards is what we talk about a lot in the social media context. That's where that language comes from. That is literally what Facebook or Meta's policies are called. There's some variation on them on the Twitters and the YouTubes' community guidelines, or whatever it is, but that's kind of the language that you are now trying to push away from yourselves, and say, "No, no, no, we are using Acceptable Use Guidelines."

I guess my first question is, on December 16th, 2022, did your approach in substance really change? Was there something that you said, "No, we are changing how we substantively approach these issues and this wording is going to reflect that"? Or is it something else? Is there something, just branding? What was the name change intended to achieve there?

Josh Parecki:

Yeah. I mean, it's a great question. Substantively, no, not much has changed. I wouldn't use the phrase branding. I would use it more in the context of what we've talked about before, which is that we're trying to, I think Josh used this phrase, more finely hone our communication around what we do. We felt that even the use of the phrase, community standards, would confuse our users, regulators, et cetera, as to what type of platform we are, for the very reason you said. Like, Meta says community standards, Twitter, what remains of it, says some sort of community standards, but that's not us. If we started to parrot what those sites look like, including the use of their language, then I think there's a danger that our users get confused. And they did.

Real practical example: most of our customer base is our enterprise customers. They read our Community Standards and they think, "Well, I don't understand. You're going to enforce your Community Standards in our private corporate environment?" So it did create some confusion. Again, lack of transparency or some opacity about how we might do it. So we very intentionally changed it to try to make sure that we were clearer or more transparent as to what type of platform we are. But, importantly, we haven't changed our approach because our approach is safety. We still want to make sure our platform is a safe place for people to communicate. We can't say, "No, we're not going to do anything to make sure your spaces are safe so you can communicate." We didn't want to intentionally cause confusion with our users and regulators and our global community.

Evelyn Douek:

Right. You mentioned regulators, and I want to pick up on that, because I think that is another big thing that has changed in the last, even, year. It's amazing how quickly this is moving. Since the last time we talked, regulators have really ... their ears have pricked up and they're really starting to look at this space much more closely. It's impossible to keep track of everything that's going on. Although, that's basically your job, so my sympathies. A cynical take on this might be, oh, you see the regulation coming down the pipe and you want to distance yourself from social media platforms, and things like that, on purpose, in order to really say to regulators, "Don't include us in this bucket. There's all this regulation coming. This is very onerous. There's all of these transparency obligations. There's all of these things that social media platforms have to do. That's not us. We are not them. Don't include us." I guess the question is, is that a big part of how you're thinking about it and is it working?

Josh Kallmer:

It's funny, I was ... I think I was telling Josh this story a few days ago. About six years ago I was in a prior job working for a trade association representing a lot of global tech companies. It was kind of around the point some tech companies were getting into hot water. The question that was being asked was, "Is tech going to be regulated? What's going to happen? How can we prevent tech from being regulated?" That just struck me as such an odd question, and just not the question at all, because fundamentally, every company exists in a regulatory environment. That's the thinking that informs how we approach this. I mean, we obviously have a ... we represent a commercial enterprise. We want to manage the regulatory environment in a way that supports our innovation and what we're trying to do. But that doesn't mean trying to avoid being regulated.

I think what we are aiming to do, not only so that we can do right by our customers and users but so we can do right by the rest of the world, is be as sophisticated as we can in our own thinking about who we are and who we are not. Have an exchange with regulators, have an exchange with thought leaders about that. Have a debate about it in some circumstances, and see where that lands from a regulatory perspective, whether it's in the EU, whether it's with the FCC, whether it's in Australia. So I think there is a public interest element to it. There also is a commercial interest element to it. I think we've found that our ability to sharpen our thinking around who we are serves both interests. That's a good thing and we should go out to the world with it as much as we can.

Josh Parecki:

Yeah. Some of the regulations, again, given the nature of our product, which is one of the things that Josh is saying, don't make sense for Zoom and Zoom's customers. That doesn't mean that the intent of a lot of those regulations is not pure and good, but the question is whether they make sense, whether they actually promote the safety and wellbeing, for example, in the context of safety regulation, of our customers.

Josh Kallmer:

I would actually look at it from the other vantage. Josh, if you were here, you could kick me under the table, but I don't think you would. Given the nature of our product, it's natural that regulators ought to lean more heavily in terms of reporting structures and transparency around how you do things rather than, on an encrypted platform, an expectation that you're monitoring in real time what people are doing. So it also involves highlighting areas that are more fruitful for regulators to potentially look at.

Evelyn Douek:

Yeah. I mean, I think this is a view that I have a lot of sympathy with, I think as you know, which is that we do need to be thinking about different layers of the internet stack and different kinds of products as very different. It doesn't make sense I think to treat you like a Facebook, but it also doesn't make sense to treat you just like a dumb pipe that just ... like a telephone, because there are all of these different affordances that you can and do provide that make you different. So this is something I think that we do need a lot more nuance in thinking about, in thinking about what your responsibility is and how that should be enshrined in law. Nuance, sadly, and words like internet stack aren't necessarily things that are music to regulators' ears in terms of thinking about this.

I'm curious, as all of this regulation is coming down the pipe and you're seeing it, what does it look like for you? Are you seeing that there are going to be very different compliance obligations in say Europe, the UK? Like the massive Digital Services Act in Europe, which is coming in the next few years. And then there's the Online Safety Bill in the UK, which is still working its way through Parliament. That's just to mention two sort of high-profile ones, but these are ... You mentioned Australia. There are obviously so many bills in the US, not on the Hill necessarily, but we have state-based legislation coming through. So how much inconsistency is there? In general, are you finding that you are being carved out and thought of separately from social media platforms, or are you finding that Zoom is often getting lumped in the same bucket?

Josh Kallmer:

I would say it's hard to generalize at this point, in part because there's such a difference between what the words on the page say, even something as advanced and implemented as the DSA is, and how they come to be enforced over a period of time. So I think time will tell. What I will say is that we feel reasonably good about, it's uneven, but reasonably good about the progress we've made in speaking directly with regulators and getting buy-in to the idea that there are business model ... put aside the reference to the stack, there are business model relevant differences that ought to inform their policy making. It may or may not result in a specific provision in a specific instance that we love, but I think the ... Certainly, our hope is that if we can at least inform the conceptual frame that they're using, that's significant progress.

I will say, one thing that is interesting as well is the momentum from the, let's call it the other end of the stack, the dumb pipes end of the stack. A lot of economies, EU included, India included, arguably Turkey, are looking at essentially telecommunications regulation and taking a very expansive approach to it, at least at this stage. Which creates the risk, that we're very mindful of, that they may over-index on a company like Zoom, from the opposite direction, when it comes to lawful intercept requirements, when it comes to universal service obligations, or whatever it is. So we've got it on both sides. I think it's too early to tell how it's going to shake out, but we feel reasonably good about our ability to inform those conceptual frameworks.

Josh Parecki:

I'll put a plug in for Josh and team right here because, again, as we sort of said at the top, one of the ideas behind updating the Safety Center is to create a powerful tool to engage with regulators. I think we've had a good amount of success in having very candid, very transparent conversations, led by Josh's team, supported by mine, about addressing some of the things that they are concerned about, that underlie some of these regulatory pushes, whether it be Europe or the UK or the United States. And then talking about all the things that we're ... first of all, how our product works, which is important, and then all of our commitments to safety, to the extent that the regulations are trying to regulate safety.

Evelyn Douek:

Super interesting that you mentioned the other end of the stack, the dumb pipes end. I actually want to put a question to you that I put to you last time and see if your reaction is different, which is, would it make you happy to be declared a dumb pipe in some sense? Because we've talked about how difficult some of these considerations, these questions are. You're spending an inordinate amount of time trying to work out if that's a nipple or if Civil War weapons are something that violates your weapons policy. In some sense, if a must-carry obligation is imposed on you, it makes a lot of that much easier. And you don't necessarily have to take the flak for those decisions because you can say, "Well, look, this is our legal obligation. We have to carry. We are now some sort of universal service provider, in some sense, with obvious illegality, core illegal content, carved out from that."

In a way, I mean, it might make your job more boring, but it also might make it in some ways easier. So I'm curious how you think about it. Because this is in some ways a very heavy responsibility. When you're thinking about, "Do we want this person associated with a terrorist organization using our service?" On the other hand, academic freedom. I know which side of the line I come down on, but I can see why there's a business consideration there. Something that's very difficult for you in terms of business risk and considerations. Would it make you happier if the regulators just took this out of your hand and said, "We're going to decide for you, just carry this stuff"?

Josh Parecki:

I'll start with just one thing from a trust and safety perspective, because our team also includes the law enforcement response team. I think Josh sort of hinted at this in his response. If we get put more towards the dumb pipe side, it's not like our obligations go away. Maybe some of the thinking on the front end about what we should or could do around safety or proactive safety measures or responding to reports, maybe some of that goes away, but we're probably going to get a net increase in things like requirements to build lawful intercept capabilities into products that never had that obligation before. Because ultimately, a lot of the dumb pipes, like the telecom companies, are responding to wiretap requests left and right, whether they're in the United States or, in some cases, abroad, if they're doing their business. As Josh said, we see in certain other countries already a little bit of regulatory capture of number-independent communication services.

Josh Kallmer:

I think there's certainly days when probably Josh and I, and some of our colleagues, feel like it would be easier to just be declared a common carrier obligated to provide universal service, but that wouldn't be the right place to land. One of the reasons it wouldn't be the right place to land is the idea of what's best for our customers and our users. That's not what they, for the most part, expect. Yes, they are primarily focused on private spaces to conduct secure communications with one another rather than a wide open public space, but there are elements to this that do make us very different than a common carrier. We don't really have significant physical infrastructure.

For example, along with the things that Josh identified, another one is just that our customers and our users expect us to be global. They expect, whether it's a multinational or a small business trying to enter a foreign market or universities doing cross-border classes, that there's going to be a seamless cloud-based experience. That in a way is fundamentally antithetical to what is still and probably will remain national-level telecommunications and basic communications regulations. So I think, just from the perspective of what is truest to our customers and users and what kind of company we are, being at either end, even though it might make life simpler on some days, is just nowhere near the right place to be.

Josh Parecki:

Also, just one more thing. Have you peeled back those telecommunication regulations? You know? There are still-

Evelyn Douek:

[inaudible 00:31:08] some headaches creates others. Yes.

Josh Parecki:

There are still lots and lots of, to quote you, what could be considered content moderation even in the context of telecommunication regulation. That's, between Josh and me, a super long-winded answer to your question. But I think, look, for us it's more important I think to try to establish what we are and then to try to find a path through proactive engagement, to making sure that we show our commitment to our user safety and security and privacy. If regulators want to hold us accountable, we'll take that as long ... I can't set those conditions, but our preference would be so long as it's with the acknowledgement of what our product is and isn't.

Evelyn Douek:

Okay. You mentioned national level boundaries, and also, Josh, earlier you referenced India. In these conversations, I can't help but sort of ask a question about India specifically, but also, more generally, the kind of trend we're seeing in many jurisdictions, which is to use law to force platforms, and again, this conversation mostly happens with respect to the Facebooks and the Twitters and the YouTubes, to take down content that we might otherwise think of as extremely democratically important speech, and of course, a hundred percent legal in the United States.

The big controversy that's playing out right now is the Modi government in India is pressuring platforms to take down a BBC documentary that's critical of Modi. This is just one topical example of a trend that we're seeing around the world. This is a situation where I have a lot of sympathy for people in your chair. It's a lot more fun for me to critique responses on the outside than it is ... This is a situation where you ... if you could, be kicking each other under the chair a lot based on what you say. So I appreciate this is a difficult question.

But how do you think about those obligations when you get legal orders from governments that are compliant with national law, but are the kinds of things that, as a free speech academic, I certainly think are critically important to protect? You could imagine many situations where this comes up in the context of Zoom. Again, one of the original controversies with Zoom was in exactly this kind of situation, where Zoom canceled services for activists in China and the United States regarding the Tiananmen Square massacre in compliance with Chinese law. I'm curious how you're thinking about that now.

Josh Parecki:

Yeah, I'll start. Josh, feel free to interrupt me or layer, or kick me under the table virtually. It's interesting, I think for us, the muscle that we've been working on building, particularly since June of 2020, is rigorous process. That includes both how we communicate that externally, how we enforce it internally, how people interact with Zoom or must interact with Zoom, if they are in fact going to make a request of Zoom, and then adhering to our standards in addition to evaluating the legal basis of their request.

So we published our government request guide. We took great pains to make it super easy to read. We actually let a bunch of folks look at it to determine whether it was as easy to read as we think it is. It includes, for example, a whole section on what we call withhold access requests. We built a law enforcement response system, which is a system that if a law enforcement agency wants to make a request, they have to use that. We provide training materials in international jurisdictions in terms of how to use that law enforcement response system. And we hold anybody that's submitting a request to the standards of using that system and then we report on it transparently in our transparency report. That has been incredibly important and effective, in whatever jurisdiction around the world you can think of, to hold them just to that standard. And then internally, in terms of how we evaluate those requests, to hold them to high rigorous legal standards. And not be afraid to challenge the request if appropriate, and we've done that in certain jurisdictions.

Evelyn Douek:

Can I just clarify what you mean there?

Josh Parecki:

Yeah.

Evelyn Douek:

When you say, "Incredibly effective in holding them to that standard," what's the material change? Do you see fewer requests coming in, or something, because the requirements are clearer or more rigorous? What do you mean by that?

Josh Parecki:

I should start by saying, we don't get a lot of those requests. Just as a starting point. But to the extent that we have gotten those requests, we say definitively that you must follow our process. What we find sometimes is that they just sort of give up and they don't want to follow our process. If they really, really wanted to get their request done, they would probably follow our process. And then we've had some instances where they followed our process, and we did not find a legal basis for the request and we turned it down. And we were prepared to live with the consequences of that.

Josh Kallmer:

Yeah. I mean, I don't know the exact number, it's probably part of the transparency report, but we have turned down a meaningful number of requests. I think probably even 13 months ago when we were much earlier in the process of doing it, we didn't know how it was going to turn out every time. I think now we feel more confident that this system has been tested, it's been tested procedurally, it's been tested substantively, and it's stood up fairly well. We just have been able to say no or no thank you, or whatever, and continue on without repercussions, which is gratifying.

Josh Parecki:

Yeah. I mean, also, to go back to the top of our conversation. We actually find, even in certain jurisdictions, a certain amount of engagement about what Zoom is and isn't also helps at the threshold level. If a government agency comes to us and they have some expectation of us to do something, sometimes our response is like, "You're barking up the wrong tree. That's not the kind of platform Zoom is. We don't do that," or, "We don't have that capability." That has actually been effective, even in jurisdictions that are a little bit more challenging.

Evelyn Douek:

You've mentioned a couple of times the importance of process. This is dear to my own heart, the importance of process in thinking about these things. Again, we talked about it on the last podcast, but Zoom has, in its time, in its experience, spun up quite an intricate, significant process when we're talking about ... Well, I mean, we just talked about the government request process, but also on the other side when it comes to user requests and these community standards dash acceptable use policy standards as well. You have this four-tier review system, where you go through an appeals process and end up at an Appeals panel, which I have more questions about.

But I want to ask if anything about that process has changed since we talked about it in the last year, and whether this pivot away from thinking of yourselves ... pushing out the fact that you are not a social media platform, how that's changed your thoughts around process. Because this is a kind of process that we do see a lot at social media platforms, and indeed, even the Trust & Safety Appeals panel looks in some ways like the Meta Oversight Board, for example. So, how are you thinking about process now? Has anything around that changed?

Josh Parecki:

Short answer is no. We still use that tiered review process, but we've done so much work to be transparent, to educate our users about how our platform works that we just don't have to use it all that often, again, back to the nature of our product, but we have used it. Every time we run into, let's say, a controversial issue that escalates to our Tier IV or our Appeals panel, as we talked about before, we follow a little bit of a docket-based process. Taking from the Supreme Court, which I know is a road we can go down as well. But we do think it's important that we memorialize all of our decisions, who made them, how they were made, what we considered, who we talked to, what data we relied upon to think about the decision we made. But we don't use it very often because our product just doesn't merit it. We just don't get a lot of those types of cases.

Evelyn Douek:

Any consideration or thoughts around making any of that public? That sounds like a treasure trove to me. You have this massive appeals system, you've got a docket, you've got memorialized decisions around some of the most difficult decisions that you have to make in your seat. Sounds wonderful. That kind of thinking sounds like stuff that I'd be super interested in, that regulators would be super interested in. Can we see it?

Josh Parecki:

We could talk about that in the future. The only thing I would tell you, Evelyn, is you and/or a regulator or other academics might be disappointed, because there's just not a lot in there. So you'd sort of get it, then you'd be like, "Oh, okay." It's interesting, when you're on the outside, and this goes to some of the articles that you've written and other people have written as well, it's sort of not an academic's fault because you just don't have a lot of data to latch onto to do some of the analysis.

We could give you access to it, but you'd probably find it wasn't all that interesting. It's like, "We got a report from person X that says this thing is going to happen, and then we pulled in a bunch of facts from a bunch of folks, we analyzed it, and then we memorialized our decision." But you'd find it's probably not as rich as say maybe a Meta or a Twitter or otherwise. Even if it was, I see the value in that, because personally, I'm proud of our team and the process that we've built and how rigorous we are following it. So I don't have any compunction with being very transparent that we do it. It's something to be proud of.

And then to a point that you've made in the past, in your articles, we don't just ... I think in your article you talk a lot about the post hoc decision-making process and why that might be flawed in the context of trust and safety, or maybe there's a better way. I agree with that in a lot of different ways. Oftentimes, if we ever have a decision that percolates up to that level, we try to learn from it and apply those learnings to how we think about trust and safety more broadly.

A great example of that, back to the earlier part of the discussion, was with Leila Khaled. That was a super controversial decision. So we took it, we learn from it, we think about how we might enforce it, we integrate new enforcement mechanisms, if, for example, we receive reports with respect to the academic community, and we build and evolve and create a feedback loop. But yeah, that's my only sort of caveat.

Evelyn Douek:

Well, I appreciate you protecting me from the disappointment that I feel. You're really looking out for me and my interest there. Thank goodness.

Yeah, I want to ask you about this anti-design thing. There was an article in the Washington Post just this week, actually, that said that "Zoom made its product more annoying to use to make you safer." It went through some of the friction and affordances and things that you've implemented to make it harder for someone to just sort of jump onto a meeting and hijack it. Meeting disruption, not Zoom bombing.

Josh Parecki:

Okay. Thank you.

Evelyn Douek:

Yeah. I want to ask you about that and how you think about it. Because as you say, I think that's right. And I think that it does make a lot more sense to be thinking about these as systemic issues and how do you build a product to make people safer rather than trying to clean up messes after the fact. So whether some of this is in part how you've seen your system change over the time that you've been doing it, the different affordances that you've used, and how you're thinking about this going forward.

Josh Parecki:

Yeah. I'm an evangelist for the notion of safety by design. I would say that we could have handled the meeting disruption phenomenon in many different ways in 2020. We could have just created a flat reporting system, dealt with it customer by customer and made very minimal changes to the product. But we had a commitment, all the way up to our CEO and our executive suite, that we needed to do something to help fix the problem. This was early on; both Josh and I started it around June 2020. We were not shy to have conversations with customers. Not just big enterprise customers, even free users. I can't tell you how many calls I took at all times of the day and night, and members of my team took at all times of day and night, around how a disruption happened, and why it happened, and what settings they did or didn't use when it happened.

We were able to take all that information in 2020, and leading into 2021, and influence the way the product was designed. The article does say ... There are really two principles in the article. One is we want to build tools that users can use to make their own environment safe. But we can't just say, "It's all on you, users." We have to do something to help them, proactively help them, so we don't just push it all on them. So yeah, we created some default settings that maybe inserted a little bit of friction. But ultimately, it's a relatively low amount of friction for what we're accomplishing, which is allowing our users to be safe. Our users can choose to peel that back. We just want to make sure they understand the safety risks of doing that.

I think this notion of a feedback loop, which is, we have a product out there, we observe some behaviors related to safety, we take definitive note of those and we start thinking about how we might solve that, make that better, without interfering too much with the core benefit of the product. Zoom is so amazing because it works. I mean, it just does. I mean, I know I work there. But it works and it's easy to use. So we did want to be really careful not to interfere with that too much, but we also needed to make sure that folks felt safe using it. So we built the feedback loop, understand how those things were happening, used it to influence the design choices over the course of 2020, 2021, tested, reiterated. We continue to get reports, understand the nature of the reports, interact with the customer, feed it back in, and on and on and on we go.

We would have training meetings with certain customers who were having high-profile public-setting meetings to say how you might do it. They'd give us feedback, "Oh, that's a little hard. That's a little easy." And then we'd sort of modify and grow as it is. And then we also sort of interacted with a bunch of members of civil society whose constituents were using the product, like for public-setting meetings. They acted also as evangelists for the safety settings. So they taught their constituent base how to use the product safely. All these sorts of moving parts taken together, this feedback loop, is really what helped, I think, Zoom tame, as the article says, but not eliminate, the threat of meeting disruption. It's an evolving threat and we've got to stay after it. And it really benefited our users. That's safety-by-design thinking, which I think is so important when you're thinking about safety.

Evelyn Douek:

What am I or the public conversation missing in this conversation as we have it? Is there something in particular I should have asked you about that I haven't asked you about that you ... the reason that you came on this podcast that you want to get off your chest and make sure people are thinking about? Is there something that you feel frustrated about that people aren't paying attention to? What's missing here?

Josh Kallmer:

Yeah, I have one. I'm not sure if it's something that is frustrating, but maybe something that's a little bit surprising that people should be mindful of, which is that I think, and I'm looking at this from the looking-out-at-the-world part of it, people make cartoonish distinctions, I think, among different jurisdictions and countries about how they're going to treat these issues. There are certainly instances where governments seem to be taking a pretty blunt and almost coercive approach to a wide range of companies in terms of their obligations to moderate content or conduct and have a relationship with the government.

But another thing that we've found in some markets that you would associate less with being open to free speech and supportive of expression and connection and so forth, is actually a remarkable openness to having this conversation. And to being educated about the kinds of companies that are out there, and what they do and what the expectations are that people have of their products, and what that means for the kind of relationship that the government should therefore have with them. I mean, I will say that Turkey is a country that has been remarkably open to these kinds of discussions.

Now, we'll see where the ultimate regulatory framework lands, but it's been a real education for me and our team. I think that there are very few conversations that aren't worth having. In many cases, a lot of these governments and jurisdictions are just earlier in the stage of developing their thinking around these issues, and they're more open to feedback and nuance than you would expect. We found in some markets, that we're hoping to get more deeply into, that it's not always smooth sailing, but we're having more success in having a real conversation, and again, having a meeting of the minds about the right conceptual framework, even if we sometimes differ on the application of that framework to a specific company or a specific use case. That's something that I just think is interesting and surprising. Not frustrating, but interesting.

Josh Parecki:

Yeah. Totally, obviously, agree with all that. I'll play a small violin for a minute and just say this stuff is hard to get right. So, again, our listeners may not have, or your listeners may not have a lot of sympathy for that position, but it is a hard thing to try to stay ahead of those that would seek to harm people in our user base or customer base, or anybody online. How to stay ahead of that and to constantly think about how to do that, while at the same time preserving folks' privacy and fundamental ability to communicate with one another. And then to navigate some of the equities or the interests that Josh described, which is a bunch of regulators that may have good intent but are taking a blunt approach. It's a super hard problem set that I think we're endeavoring to be very ... as transparent as we possibly can around how we're thinking about it, and constantly dedicated towards improving and innovating around it.

I think it's also good to just emphasize another point that Josh said, the power of even having conversations with regulators and our user base, even if it's uncomfortable, is outsized in comparison to the pain you may feel going into that conversation. You just have a chance to learn from other people. Not just our customer and user base, although there's a correlation, but also with civil society groups, academics, and others that just might want to provide their feedback and input. I think we do that. We welcome it. And we think it's had a lot of impact on our approach. But yeah, small violin. Safety is hard, especially in the context of communications.

Evelyn Douek:

Yeah. I mean, I obviously couldn't agree with you more about these issues being really hard. My paycheck kind of relies on the fact that these things are hard and we can't solve them, so let's keep paying her to keep thinking about them. I think engagement with governments is one of those particularly tricky issues. Because I think this is one where, on the one hand, you're obviously correct that engagement is necessary. A lot of these governments are coming from a place of not knowing many things, and informing them and educating them can be really influential and positive. On the other hand, of course, I am a free speech scholar; you tell me that communication platforms are meeting with governments, including governments known to issue authoritarian orders around censorship and speech, and my heart rate rises a little bit in terms of thinking about that. I mean, it's a really, really hard line to walk. Of course, I think there's a lot to what you say about the value of these conversations and the importance of them.

Another area where there are all these trade-offs is transparency around that as well. Because my response might be, "Well, okay, that's fantastic, have these conversations, but let's make them as transparent as possible, so that we know what you're saying and we know what they're saying." But then there's a question as well about how influential and how open and how persuasive you might be able to be. I do wonder whether your experience of those conversations might be very different from someone else's. This might be an area where the layers of the stack are also very different as well. Because as you say, Zoom just works. Everyone opens it. It kind of would be really annoying if I didn't have Zoom, as much as we joke about it. It's a service, it's a utility that people need, including I'm sure government actors, but the conversations might be very, very different for platforms that are more politically sensitive or otherwise. It's a very difficult trade-off, but I feel like I couldn't keep my credentials as a free speech scholar without noting the different equities, the difficult equities when it comes to engaging with governments as well.

You also mentioned the pain of going into these conversations and needing to be open and transparent. I guess that's just one last question, which is always, why are you talking to me? Why are you doing these conversations? There is some risk. The more you say, the more you open yourself up as a target for criticism, to say, "Oh, look, Zoom is trying to take this approach." From both sides. Either, "Oh, my God, it has acceptable use policies. I didn't know that. That's outrageous. I thought that I could do anything in my meetings." Or, "Oh, my God, they're moving away from community standards towards acceptable use policies. Are they not thinking about their responsibility seriously?" We've had a long conversation, so I don't need to reiterate why neither of those nuanced positions are right. But I guess the question is, why do you do this? Why do you put yourself out there like this?

Josh Parecki:

Well, your question sort of answered the question, in the sense of, if people listen to the podcast and they say, "Why Community Standards and why AUP?" One of the reasons I think it's a good idea to have a conversation with somebody like you is, number one, I think you understand some of the nuance and you know how to ask some targeted questions around that nuance, but two is to try to bring clarity to it. Again, we make these changes. We try to be really thoughtful about how we do it, to articulate what our product is, what our product isn't, how we approach safety.

Because fundamentally, we're going to be held to a standard, by governments and our users around the world, that we have a safe product. I'd rather have that conversation in a relatively public setting, or a public setting, to say, "Hey, we're thinking about these things. And we're interested in talking about them. We have some humility around it." Because if we don't do that, then it's just sort of a black box. And you're right, there are some risks in having a conversation like this, but I think it's probably a risk worth taking.

Josh Kallmer:

And we get feedback. I mean, whether it's getting feedback from the questions you asked or what your responses are. As we learned from the discussion we had 13 months ago, it shapes our thinking. We think in maybe not a perfect way, but in a pretty healthy way.

Evelyn Douek:

All right. I look forward to our conversation in a year's time then when I discover the ramifications of this conversation. In any event, I really appreciate it. I will send you a check for all of those shout-outs to my articles, Josh. That was fantastic. Really nailed it. It sounded completely natural. Thank you both very much for coming on.

This has been Moderated Content. This show is available in all the usual places, including Apple Podcasts and Spotify, and show notes and transcripts are available at law.stanford.edu/moderatedcontent. This show is produced by Brian Pelletier. Special thanks also to Alyssa Ashdown, Justin Fu, and Rob [inaudible 00:55:00].