Moderated Content

MC Weekly Update 12/27: Trust and Safety Does Not Take Holidays

Episode Summary

Alex and Evelyn sweat through the holidays to make sure you get your critical trust and safety news, including Congressional action on platform transparency (cameo by Nate Persily); TikTok and LastPass data breaches (yikes!); and, of course, Twitter mayhem (sigh).

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Extra special thanks this week to the production team, Brian Pelletier, Alyssa Ashdown and Ryan Roberts for making sure this reached you during winter shutdown.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Evelyn Douek:               I might finally be able to do the Twitter files thing, although I won't attempt it because, I don't know if you've seen air-

Alex Stamos:                 Because you have a cough.

Evelyn Douek:               ... air quality... No, the air quality in Palo Alto is terrible, because everyone's burning like winter fires and so it's very smokey. I've finally been like smoking that pack a day for you, Alex, so that I can-

Alex Stamos:                 Let's hear it.

Evelyn Douek:               ... sub in. But those Twitter files.

Alex Stamos:                 So you've got to get closer to the mic.

Evelyn Douek:               The Twitter files. How's that?

Alex Stamos:                 Oh, that is excellent.

Evelyn Douek:               I learn from the best. I learn from the best.

                                    Welcome to Moderated Content's weekly news update from the world of trust and safety with myself, Evelyn Douek and Alex Stamos. We are dropping an episode into your feed today, because the world of trust and safety doesn't take the holidays off and so neither do we. Super commitment here. Where are you calling in from Alex?

Alex Stamos:                 I'm calling in from beautiful Lake Tahoe, California, where I'm watching my kids throw snowballs at each other, while I'm inside podcasting. So Father of the Year award, yes.

Evelyn Douek:               Exactly. Priorities. Okay. So the first story that we're bringing to you today is some legislative action on the Hill. A bipartisan group of senators introduced the Platform Accountability and Transparency Act, or PATA, into Congress. They had released a draft last year, so a lot of us knew that this bill had been floating around, but it's been finally formally introduced.

                                    John Perrino of our very own Stanford Internet Observatory, has a good summary of the changes in the bill since the draft was released at Tech Policy Press, which we will link to in the show notes. But for a high level take and summary, we did speak to our colleague Nate Persily. He's a Stanford law professor and co-director of the Cyber Policy Center, so sort of both of our boss in some sense.

Alex Stamos:                 Right. As much as anybody has a boss in academia, he's one of my five bosses. Yes.

Evelyn Douek:               Exactly. He can't fire me, but I still defer to him.

Alex Stamos:                 When he gets mad at me, I do feel sad, even though, yes...

Evelyn Douek:               Exactly. I pay slight attention to his preferences. So he's been working hard on platform transparency for years now, and the bill incorporates recommendations from his testimony before a senate committee last year and draft legislation that he wrote up and proposed. So he's a really great person to talk about this, and here is his summary set to the dulcet tones of a balcony in Boynton Beach, Florida. Enjoy.

Nate Persily:                 On December 21st, 2022, the Platform Accountability and Transparency Act was introduced by a bipartisan group of senators. Senator Chris Coons, Senator Rob Portman, Senator Amy Klobuchar, and Senator Bill Cassidy introduced what is now known as PATA to provide for greater transparency for the internet platforms. It would accomplish this in three principal ways.

                                    The first is that it would provide for a safe, secure, privacy protected pathway for researcher access to platform data as supervised and administered by the National Science Foundation and the Federal Trade Commission. Second, it would immunize researchers who want to scrape publicly available data from these platforms, so they'd be immunized from civil and criminal liability.

                                    And the third is that it would impose certain public disclosure obligations on the platforms, in three principal areas, first related to widely viewed content, so we would know what kind of content is most popular on the platforms. Second, with respect to advertising, so we could know more about who's advertising, who's spending money on these platforms, and how they're targeting certain users. And third, it would require greater transparency related to algorithms, so that we could get a better understanding of, say, the ingredients in the algorithms that produce the Facebook or Twitter news feeds, or, say, the YouTube recommendation algorithm.

                                    Let me spend a moment just saying why I think we need this legislation. We cannot continue to live in a world where platforms like Facebook and Google know everything about us and we know next to nothing about them. The first step toward sensible regulation of these platforms is to understand what's actually going on on these platforms.

                                    Republicans and Democrats disagree about what the problem is when it comes to the internet and these platforms, but they're united in their sort of hatred for big tech these days. But like I said, they have different problems that they focus on. So Republicans tend to think that the problems are, say, ideological bias in the content moderation systems or in the algorithms. Democrats tend to worry that the platforms are awash in disinformation, hate speech, and incitement.

                                    Whatever might be the sort of reality behind these allegations, the first step in trying to grapple with them is to really understand the data that the platforms themselves have. And right now it's only those who are economically beholden to the platforms who have access to that data. And so the PATA would make some of this data publicly available and some of it more easily accessible to outside researchers.

                                    So what's the chance of, say, passage of PATA? I think it fortunately is a truly bipartisan piece of legislation, which is I think rare when it comes to tech regulation. My guess is that it's most likely to be passed as part of an omnibus tech regulation package or attached to another bill, say, something related to protecting children online.

                                    But just on a personal note, I'll say that I've been working for at least five years on the problem of transparency and data access to these platforms. And we can no longer rely on the kindness of platforms to make data available to outsiders. We need to figure out a way for the public to get greater understanding of what's happening on the platforms, and for some researchers who are not working for the companies to have access to the same kind of data that the internal researchers do.

                                    And so these big platforms like Facebook and Google and TikTok and Twitter, I think, have lost the right to secrecy that most corporations have. And that whether you think that they are the new public square or whether you think they're just really powerful corporations that are sort of dominating the information ecosystem, the first step toward regulating them is understanding what's going on. And so we need to pass some kind of transparency legislation as a first step toward regulating them in the public interest.

Evelyn Douek:               So I asked Nate exactly how likely he thought the law was to pass, and he put it at 37%. I would've put it more at like 35.3, but he's the expert, so we should defer to his calculation there.

Alex Stamos:                 I'll take the under on that, honestly.

Evelyn Douek:               That's right. But, no, this is genuinely exciting. I think Nate's right, platform transparency is really important. Data access is really, really important, and the fact that this is bipartisan and is serious and well thought out represents the possibility of actual thoughtful behavior at Congress, which in itself should be appreciated.

                                    So TikTok. Big news in TikTok this week is an internal investigation: ByteDance, the parent company of TikTok, found that employees tracked multiple journalists covering the company and improperly gained access to their data, including IP addresses. And it was clear that the reason they were doing this was to try and track these reporters and see whether they were in the same vicinity as ByteDance employees, after there had been some leaks.

                                    In particular, Emily Baker-White has been having a blockbuster year, first at BuzzFeed and now at Forbes, and she's reported a lot on some of the improper practices at TikTok. And she was one of the reporters that had been tracked by ByteDance. And so this was pretty big news. I was personally surprised that they admitted this and came out publicly. This was not a leak itself. This was an email to employees. Curious for your take on this one.

Alex Stamos:                 This was interesting. One, like you said, they admitted it. ByteDance is in the middle of a negotiation with the Biden administration. Part of that is they have a number of reasonably well-respected consulting firms in-house, who are looking at data access practices, like Booz Allen Hamilton, who's a big US defense contractor. Definitely, not seen as pro-Chinese in any way. And so I wonder if their hand was forced by some of these internal investigations that this was going to come out no matter what, or be given to the Biden administration.

                                    Crazy timing for ByteDance. There is a bill that's already out that we've discussed on the show, a bipartisan bill to effectively outlaw TikTok in the United States. And this news is definitely providing some more steam behind that. Now, this is an issue that exists at any big tech company: you could end up with people trying to use data in ways that they're not supposed to. Companies do things like leak investigations all the time, but usually at a big US company you will have some pretty strong guardrails around that.

                                    My team at Facebook definitely were not allowed to look at journalists and the like. We had to get special permission. One example where we actually looked at data for journalists was when we found that the Russian GRU was feeding hacked data directly to journalists. And so that's something that ended up in our report, and ended up in data that was obtained by the Mueller team under search warrant. But to do that, we had to jump through all these hoops. And that was, obviously, very exigent circumstances when we're talking about a government interfering in US democracy.

                                    "Just, we're mad at this story, so we're going to track journalists"? That is exactly what any kind of big legitimate company is supposed to stop. So yes, this is not good news for TikTok. And I think it underlines a big fact, which is, in the discussions with the Biden administration on the controls TikTok's going to put in place, there's been all this discussion about whose cloud you're going to go into. And Oracle is obviously pushing this because they want the contract, to get all this money to keep the data in the US in Oracle's cloud, but that really has very little to do with the actual risk. The actual risk is tied to, what can people get out of either internal interfaces or data warehouses?

                                    It's not clear exactly how the internal access happened in this case, but it could have been either of those kinds of technical means. And we've heard so much less about that. There's also a lot less money in that for Oracle. So I am a little afraid that the Biden administration's discussion here is being dominated by a US tech contractor, who's looking to make a ton of money, and much less about the actual risk posed by companies like TikTok. And, as we discussed, WeChat.

Evelyn Douek:               Right. Classic example of where companies don't only oppose regulation or legislation, but also try and shape it to meet their own ends. And like you said, this couldn't have come at a worse time for TikTok, where there is finally real momentum. There's been concerns in the air around TikTok for a long time, but there's real momentum with the bill to ban it on government devices, being slipped into the omnibus bill, and the bill to ban it entirely still floating around. So good luck to them.

                                    In other big data breach news this week: Merry Christmas to all LastPass customers, who have to spend their time, when they are stranded at airports, updating all of their passwords. This is extremely inconvenient. How big of a deal is it?

Alex Stamos:                 This is a really big deal. It's a big deal on two levels. One, the breach itself is a big deal, and we are going to see the impacts from this breach over the next year. And then the second is that LastPass has really mishandled the response. So on the breach itself, LastPass is a massive provider of password security services. They provide a piece of software that you keep all your passwords in, and we live in this crazy world where we all have pocket supercomputers, but we still use this username and password paradigm from 1960s timeshare architectures. But that's the life we live. It's like we all have beehive hairdos and we're typing into an IBM mainframe in Mad Men times. But if we're going to live in that world, then one of the most secure things you can do is have a random password per site and keep them somewhere safe.

                                    LastPass had a breach where a backup system was accessed and a bunch of data was pulled down. Now, LastPass, in their response here, has made a couple of mistakes. One, they have underplayed the impact. They have talked a lot about the encryption. But it turns out, when you look at the details of LastPass's encryption, there are a lot of weaknesses. One, the URLs themselves are not encrypted. So that means if you are the attacker, you're now sitting on the data of millions of people, all of their passwords. You can see which domains you really care about, and then point your cracking capability at exactly those.

                                    The second is, and this is not a cryptography podcast, so we're not going to get into it too much, but LastPass was not using a key derivation mechanism that is extremely computationally expensive for the attackers. And so the level of complexity here is not at the level you would want to see. And so, as a result, it is definitely possible for the attackers to crack targeted passwords. It is not possible for them to crack the whole thing. So this is not a situation where they're now just sitting on the passwords for all these folks.

                                    But they can look up, this person is important and this URL is important, and now I want to crack this one password. And funny enough, with the dumping of cryptocurrencies, it is a great time to build a password cracking cluster, because you can get all these out-of-warranty, really worn-down graphics cards that used to be used for mining Ethereum and the like for really cheap on the black market now. And so for 10, 15, $20,000 you can build a password cracking supercomputer that 10 years ago would've been considered only something that you'd see from a government.
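To make the key derivation point concrete, here is a minimal sketch, assuming a generic PBKDF2 setup rather than LastPass's actual parameters: an attacker who steals an encrypted vault must repeat the derivation for every password guess, so the iteration count directly sets the per-guess cost, and a low count makes targeted cracking cheap.

# Minimal sketch of password-to-key derivation with PBKDF2 (illustrative
# parameters only, not LastPass's actual settings). The attacker's cost of
# testing one password guess is roughly the cost of one derivation, so the
# cracking cost scales with the iteration count.
import hashlib
import time

def derive_vault_key(master_password: str, salt: bytes, iterations: int) -> bytes:
    """Derive a 256-bit vault key from a master password with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations, dklen=32)

salt = b"per-user-random-salt"  # would be random and per-user in a real vault
for iterations in (5_000, 100_000, 600_000):  # illustrative low, medium, and high counts
    start = time.perf_counter()
    derive_vault_key("correct horse battery staple", salt, iterations)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Higher iteration counts mean each guess costs the attacker more compute.
    print(f"{iterations:>7} iterations: ~{elapsed_ms:.1f} ms per guess on this machine")

The exact numbers do not matter; the point is that a vault protected by a low iteration count can be attacked far more cheaply, which is why the GPU-cluster economics described here make targeted cracking realistic.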

                                    So yeah, this is going to be bad. And LastPass has really downplayed it. They delayed their announcement, they buried it in kind of pre-Christmas news. It's a really unethical thing to do. They should have had an announcement quite a while ago, and they have not acted in a way that really looks out for their users. So I have to recommend that people get off LastPass. Right now I'm recommending 1Password for most of my normie friends.

                                    And then if you're a nerd, there are other options like KeePass and some other ones that are a little harder to use. But if you're looking for something that you can share with your family and friends, 1Password's probably your best bet right now. Their cryptography looks better, based on some of the things they've documented. That being said, nobody is invulnerable to this kind of attack.

Evelyn Douek:               Lots of people who write their passwords on a notepad on their desk feeling very smug this week.

Alex Stamos:                 Which is not bad. I mean, I've always told people, "Having your passwords," and this is especially for when you're dealing with your parents or your grandparents, "having them write their passwords down in a little black book and keep it in their pocket or their purse" is fine. Somebody can't reach from 4,000 miles away and pull that out of their purse, and then all of a sudden have their passwords. So it is not a bad thing to actually put your passwords on a sticky note or a piece of paper. That is just not the threat model that most people face.

Evelyn Douek:               Excellent. And with that, we have to move to our inevitable and recurring weekly Twitter segment.

                                    All right, so when we left you last week, Musk had done a poll saying that he will reluctantly, of course, obey the voice of the people and step down as CEO, if people voted yes. They did in fact vote yes. And he has said he will step down. He just won't tell you when or how. It's as soon as he finds someone stupid enough to take the job, which as we've talked about on this podcast, could be a while. And he will remain owner and remain in charge of key parts of the company.

                                    So questionable headlines this week, where a lot of the press reported Musk has agreed he'll step down, just like I have agreed in my New Year's resolution to really clean up my diet and exercise at some point in the future. So I won't be holding my breath on that one.

Alex Stamos:                 Well, I mean, what's obviously driving this is as we speak, Tesla is now at $112. It has lost 72% of its value in the year 2022. So it is diving and the dive has accelerated. Now, Tesla was always overvalued by a lot of people, but it is clear that Elon, being a part-time CEO and then alienating his biggest potential clients for electric cars was not a great business move. If you were listening to this podcast, you would've known that this was going to happen.

                                    In fact, if you had traded, if you had shorted Tesla stock by listening to Moderated Content, you would need to have our mattress coupons or whatever it is we're going to do as podcasters. So anyway, this is something we predicted, and it continues to drop. That's putting a lot of pressure on Musk, and he clearly wants to stay CEO and main character on Twitter, but that seems more and more incompatible with him staying one of the world's richest people through pushing Tesla, and trying to push Tesla through what are going to be some real competitive kind of macro headwinds. And also the fact that other car companies have figured out how to make batteries.

Evelyn Douek:               All right, and as the lawyer on this podcast, I feel obligated to remind you that nothing we say constitutes investment or legal advice. Okay. So three Twitter files updates worth discussing this week. So the first is what Musk said was evidence that the US government paid Twitter millions of dollars to censor information.

                                    Sigh on this one. Actually, he's right that the US government was billed by Twitter for millions of dollars, which is a rounding error, but it is true. Alas for Musk, it had absolutely nothing to do with content moderation. This is fairly standard practice for data requests under the Stored Communications Act, which the government routinely makes. It requires a judicial order in order for companies to hand over the data.

                                    Legitimate questions here about data requests from the government and the law in question, but, you know, you were tweeting about this, Alex, it's absolutely not controversial. Data requests and compliance are something that the social media companies all report in their quarterly transparency reports. This is not hidden, and so massive nothingburger on this one.

Alex Stamos:                 Certainly, I think, again, we want to be generous here. The generous statement is, wow, the US government has kind of continuous engagement with US tech companies. That is true. The US tech companies want to stop hackers. They do not like the Ministry of State Security or the Russian GRU utilizing their platforms. And so you do see cooperation on that. You see cooperation on terrorism. You see cooperation on child safety.

                                    And then you also have the part of the relationship that is just legally mandated. So there's stuff that companies do that they have flexibility on, and I think that's where you can be critical and you can say, "Hey, the companies are too close." This is not something they have flexibility on. The law just straight up says, in 2703(d), that if the US government goes and gets a warrant signed by a judge, they can come get your data off of these companies.

                                    This is a law signed by Ronald Reagan. This is not something new. It has been enforced against telcos for years. And the other part of the law, 2706, says, okay, well, if the government asks you to go get data, you can charge a reasonable fee for that. And it turns out this has been going on for a long time too. You can go right now and find a PDF of AT&T's tariffs of all the things they charge the US government for. And one of the things they charge for is wiretaps; they have an hourly fee, they have a fee for getting the voicemail and such. And so it's not just the big tech companies, it's anybody who has to respond and has to have teams and teams of lawyers to do this kind of work; they end up having the ability to ask the government to reimburse them.

                                    And for something like 11,000 requests from the US per six-month period, so you're talking about 20-something thousand requests a year, you end up with a little over $3 million in reimbursements. Not a crazy number. If you do some Googling here, it turns out that there have been companies that have overcharged the government, and they've been sued for that, right? There's like a small wireless provider that turned this into a big moneymaker. But if you think about the fact that Twitter probably has to have something like 30 or 40 lawyers on staff full-time just to handle global requests, three million bucks is a drop in the bucket, because most countries do not reimburse for the costs that they cause here.
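For a rough sense of scale, a back-of-the-envelope calculation with the approximate figures mentioned here (roughly 11,000 US requests per six-month period and a little over $3 million in reimbursements, both approximations from the discussion) works out to something on the order of $135 per request:

# Back-of-the-envelope reimbursement math using the approximate figures from
# the discussion; illustrative only, not Twitter's actual accounting.
requests_per_half_year = 11_000
requests_per_year = 2 * requests_per_half_year   # "20-something thousand requests" a year
total_reimbursement_usd = 3_000_000              # "a little over $3 million"

average_per_request = total_reimbursement_usd / requests_per_year
print(f"~${average_per_request:,.0f} reimbursed per request")  # roughly $136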

                                    And so, like you said, you can go to Twitter's transparency reports, transparency.twitter.com. This is something that started after the Snowden files, when the big tech companies started saying, "These are the kinds of requests we get and the numbers from different countries." If you think the government has too much power to get user data, that's a totally reasonable position. That is a position that the ACLU and the EFF have had for decades now. And you could go fight and say, "2703 needs to change." Honestly, where I would start, if you want to be a civil libertarian on that, is either FAA 702, the classified requests, or location data.

                                    From my perspective, I think the creepiest thing the government does is the warrants where they're able to geofence and get everybody's phone within a certain area. I don't think there's enough controls on that, but there's lots of legitimate arguments there. But the argument cannot be that the FBI is paying for content moderation. That's just completely untrue. And Musk has amplified that and not corrected it. Once again, we're seeing kind of a kernel of legitimate concern that people should have in a free society, then exploded into an interpretation that is completely untrue, just to kind of drive people's grievance politics.

Evelyn Douek:               And unclear that he should be complaining about this really, when it could be one of his remaining reliable revenue sources, at this point.

Alex Stamos:                 He did not announce that they were going to stop asking for reimbursement. That would be an option. If he said, "I don't want that $3 million to pay for some of these lawyers who have to sit there all day looking at these law enforcement requests," that's his choice. But what is not his choice is whether he fulfills those requests. As long as we have the Wiretap Act, as long as we have the Stored Communications Act, without Congress taking action Musk has no options here except to fulfill those legal requests signed by a judge.

                                    They can go fight them. The companies fight these requests all the time. There's a long history of jurisprudence here. And in fact, the Ninth Circuit is one of the only circuits where people's data is really protected, because the tech companies have gone and had these big fights with the US government. In a bunch of other parts of the country, in theory, a bunch of your data can be accessible without a warrant, because of some crazy stuff in the SCA about 180 days and backups and things like that. And so there is a long history here. If he wants to fight it more, then he can. But he hasn't announced any of that. He's just trying to stir people up and get them to believe something that's not true.

Evelyn Douek:               But the next Twitter file drop this week was the one that you tweeted was the first Twitter file thread that doesn't take wild leaps outside the actual evidence. This one came from Lee Fang at The Intercept, and it's about something that we have talked about on this show before, a couple of weeks ago, which is US Government covert information operations.

                                    And basically Lee had more facts about how Twitter handled those accounts. It was that they had been put on a white list, meaning that they weren't moderated by Twitter and they weren't going to be taken down. And that was in part because, when they were initially set up, they were openly affiliated with the US government. It was very clear that this was government messaging, open and endorsed, white propaganda of US messaging.

                                    But then over time, the Pentagon shifted tactics and began concealing its affiliation. SIO has reports about these operations. And so we had known about that, but we hadn't known that Twitter had original awareness of these accounts, and did not shut them down over time. So Alex, what was new for you in this story? And why did you find it interesting?

Alex Stamos:                 Like you said, the core behavior by the US government was not new. If you go to io.stanford.edu and you scroll down, there's a report called Unheard Voice that we did with Graphika to analyze US government, specifically the Pentagon, Central Command and Special Operations Commands, covert influence operations on these big tech platforms. We call it Unheard Voice, because they did such an incredibly bad job that our first tweet thread about our report got more engagement than anything the US government did over a multi-year period.

                                    But that being said, even though it wasn't very effective, I consider it not morally or ethically responsible behavior by a democracy. And unfortunately, if you write a report like this and you're honest, we see Chinese and Iranian propagandists now cite our report all the time on why it's hypocritical to shut down Islamic Revolutionary Guard Corps accounts or Ministry of State Security accounts.

                                    So the US government should not be doing this kind of work, period. There's supposed to be a review in the DOD about policies here, but I have not heard any updates of whether or not they're going to change their policy about whether this kind of action is allowed or not. What was new here was it turns out that, like you said, the government had contacted Twitter and said, "Hey, here are some accounts we want protected from behavior. And we want some of them, actually, check marked."

                                    This kind of thing happens all the time. So you'll have the government of France say, "Hey, we'd like the Secretary of State of France or we'd like the health ministry to have a verified account, and please give us a check mark." And so the companies need to have policies about in what situations they allow it. Generally, if it is an official, overt account from a ministry, from a state official, they will allow it, they'll give it a check mark, they will protect it so that not just any random person can take it down.

                                    You have to have a pretty high violation, or a high-ranking person inside the company has to decide to take it down. But as you said, some of the accounts that they got protected turned into covert accounts. So it looks like the Pentagon actually manipulated Twitter here, lied to Twitter, and that's not cool on multiple levels. I mean, again, the government shouldn't do this kind of work at all, but they certainly should not utilize the goodwill that the companies extend to official accounts to manipulate them into protecting unofficial and covert propaganda.

                                    So, yeah, I think this is actually, of all the Twitter files stuff, the one that was the most legitimate. Now, unfortunately, The Intercept has effectively said we're propagandists before. So it is nice to see The Intercept now citing our work in a positive way, and we'll continue to do it if they want to follow up on any of our other work. We also have work about, say, the Russian Federation and the People's Republic of China that they might be interested in.

Evelyn Douek:               Right. And this is again, I mean, this is a constant theme, but the Twitter files do raise a legitimate issue. One of the issues here is about system design. So we were talking about a similar system at Meta a couple of weeks ago, because the Facebook oversight board released a decision on its crosscheck system, where it had effectively white listed a whole bunch of accounts.

                                    And one of the recommendations that the oversight board made was that there be yearly audits of the people on the list to make sure that the accounts were still the accounts they said they were, that they hadn't been bought, they hadn't been co-opted, or they hadn't, as in this case, been covertly changed to do things that they shouldn't be doing while they're white listed, to make sure that the list is up-to-date. You don't just get stuck on there and stay on there for the rest of the account's life. And that kind of system design would have caught this issue at Twitter, but instead it was set and forget.
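As a rough illustration of the kind of periodic review being described, here is a minimal sketch, assuming a hypothetical allowlist store with per-entry review dates (not Twitter's or Meta's actual systems): entries whose last review is older than the audit interval get flagged instead of silently staying protected forever.

# Hypothetical allowlist audit sketch: flag protected accounts whose last
# review has lapsed, rather than leaving them set-and-forget.
from dataclasses import dataclass
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # yearly audit, per the recommendation

@dataclass
class AllowlistEntry:
    account: str
    reason: str             # e.g. "overt government account"
    last_reviewed: date

def entries_needing_review(entries: list[AllowlistEntry], today: date) -> list[AllowlistEntry]:
    """Return allowlisted accounts whose last review is older than the audit interval."""
    return [e for e in entries if today - e.last_reviewed > REVIEW_INTERVAL]

# Hypothetical data: an account allowlisted years ago and never re-checked
# surfaces for review instead of staying protected indefinitely.
allowlist = [
    AllowlistEntry("@example_ministry", "overt government account", date(2017, 6, 1)),
    AllowlistEntry("@example_agency", "overt government account", date(2022, 11, 15)),
]
for stale in entries_needing_review(allowlist, date(2022, 12, 27)):
    print(f"Review needed: {stale.account} (last reviewed {stale.last_reviewed})")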

Alex Stamos:                 What you see from these Twitter files is that Twitter is a much smaller company than Facebook. It is a much more chaotic company. And so in a situation like that, at Facebook you could set a calendar reminder, because you have hundreds of people who work in that department who go do it. For Twitter, it's a couple of people and a well-trained monkey typing away at a keyboard, and it's just unrealistic for them to be able to follow up.

                                    And so, again, there are lots of legitimate underlying concerns in the Twitter files. It's just that those underlying concerns don't naturally always line up with culture war tropes and trying to turn it into a blue versus red thing. And that's what's really sad about this, because I think one of the... You and I should do a wrap-up on the Twitter files when this is all over, but one of my predictions is that, unfortunately, legitimate kind of libertarian concerns about the relationship between the US government and US tech companies are now going to get buried as only being a MAGA thing.

                                    And that's really unfortunate. The kind of real political bent on this, and the lack of looking at "the other side," the fact that this kind of pressure comes from all kinds of political actors, is really going to make it less likely that a big swath of the intelligentsia is going to take these kinds of concerns seriously in the future.

Evelyn Douek:               Right. And that's a perfect segue to the third Twitter files that we want to talk about, which was one released yesterday about Twitter and other platform's policies during the pandemic. And this one was especially frustrating for me as someone that has written about a lot of the criticisms that this tweet thread was making, but, again, framed in completely conspiratorial ways.

                                    So this was about how Twitter executives and social media platforms in general were struggling with working out what to label, what to take down as COVID misinformation during the pandemic. A bunch of things here. First, the thread shows very clearly that Twitter executives pushed back on some of the demands that the administration was making. "They did not fully capitulate" is wording from the thread itself. And when these First Amendment challenges inevitably come, saying that the government was jawboning the platforms, that's going to be a real challenge, because the test will be something like coercion, whether the platforms were coerced into taking things down and so had no real free choice, and the fact that they were pushing back on things will be strong evidence that that was not the case.

                                    The thread pointed out three serious problems, it said, with Twitter's process around the pandemic. The first was that much of the content moderation was conducted by bots, which is still too crude for such nuanced work. Again, this is not news. The very first thing that pretty much every major platform did when they rolled out their COVID misinformation policies was say, "We are going to be relying more heavily on automated moderation to do this work, because we've had to send a lot of our contractors and employees home, because they can't work in the office. And this is not the kind of work that they can take home as easily because of data concerns." And they openly said, "This means we're going to have more errors."

                                    The second problem that the thread pointed to was outsourced moderation to places like the Philippines, where there were non-experts adjudicating tweets on complicated topics requiring scientific expertise. And third, that there were subjective executives in charge of these decisions within Twitter. The allegation is that it's political bias, but more realistically these are people who were just trying to work things out on the fly and didn't know what to do.

                                    The thing that's frustrating for me is that I said at the beginning of the pandemic, and I still say now, that this is such an opportunity to have sort of data collection throughout the pandemic and a big review at the end of the pandemic about how content moderation was conducted during this public health emergency. There will be another public health emergency. I think there are really legitimate questions about the way that platforms handled this, the kinds of things that they took down.

                                    Masks is the example here that was rolled out at the beginning of the pandemic. There are all of these questions about the effectiveness of labels. It would be amazing, it would be an amazing opportunity to learn so much about how content moderation worked, how these platforms worked during this time. But I think we need to be careful not to have hindsight bias. Now, a lot of the choices might seem obvious, but they weren't at the time. And, again, this kind of opportunistic framing of all of these "revelations" is very frustrating and makes that kind of measured review much less likely.

Alex Stamos:                 Yeah, there are legitimate core issues here. I mean, our colleague Renee DiResta talks a lot about how the pandemic is one of the best examples of how things have changed in social media. That we now see kind of elite consensus created in the open. That days or weeks after this virus comes out, people are demanding to know everything about it. They want to know what protections are in place. Legitimately, people are trying to see all this. And usually science takes years to figure these things out. And it used to be that much more closed circles of people, publishing in peer-reviewed journals that were only being seen by people at universities, were doing it. And now it's all the pre-prints, it's all the public engagement.

                                    All of this discussion happens in the open, and as a result, it's much messier. We get to see the sausage being made. And in the end, the speed at which we ended up with useful vaccines for COVID is amazing, right? If you look at any other disease in history, this is an incredible outcome. The Trump administration should honestly be extremely proud of Operation Warp Speed, and everything the US government did under Trump to try to pave the way for vaccines to be released quickly.

                                    The flip side of that is that everything leading up to that science, and then the discussions about exactly how useful the vaccines are, what are the possible side effects and stuff, are all happening in the open. And so instead of people fairly trying to weigh this, you end up with grifters, with political folks. There are people who are making seven figures a year, millions of dollars a year, selling newsletters just trying to play off of people's vaccine hesitancy. And as a result, possibly leading to hundreds or thousands of preventable deaths.

                                    And so in an information environment where both everything is kind of out there and you have these bad actors who are intentionally driving it, taking one little part of a conversation, where if you have 90 doctors say one thing and one MD says another, you amplify only the one who disagrees, it's a very difficult place for a company like Twitter to be in, of what kind of amplification do you provide and such.

                                    And yeah, it's tough. I mean, I think, just like with the Hunter Biden laptop, if you did a fair review of how the companies did during COVID, you would decide that there are parts of this discussion where the companies just need to be neutral, because they don't have the information necessary at the time to make an intelligent decision. They're not qualified. And some of that is just things that you can't fix if you're at a tech company, that you have to allow us as a society to wrestle with. And there will be downsides to that, but that's just part of living in a free society. But for other parts, it's totally legitimate for them to decide not to allow grifters and conmen and really bad actors to utilize their platforms for massive amplification.

                                    For the most part, what Twitter was doing was labeling stuff, and like you said, the actual empirical evidence on the usefulness of labeling is extremely mixed. So it's not clear whether that did anything at all. But, yeah, again, another Twitter file where there is a legitimate concern and discussion to be had here, but the whole thing gets framed up in a simple, 240-characters-per-tweet tweet thread that's all about culture war, red versus blue. And so we don't get any intelligent discussion coming out of it.

Evelyn Douek:               So not over, unfortunately. Follow-up pieces, says Elon, are to come next week, featuring leading doctors and researchers from Harvard, Stanford, and other institutions. We wait with bated breath. You have some speculation, Alex, about what we should expect there.

Alex Stamos:                 Yeah, I mean, I think, again, there's a legitimate argument here of, if you've got a bunch of different doctors and epidemiologists or MDs with a variety of different backgrounds fighting about virology, even though they're not all virologists, is it Twitter's position to try to intermediate that? And who in our society should intermediate that conversation? Is it the National Academy of Sciences? Is it the American Medical Association? Is it state licensing boards? Pretty clearly it's not the tech companies. So I expect that there will be decisions that were made about labeling stuff and such that should not have happened.

                                    And we have seen kind of legitimate researchers who have something to say about actual side effects from vaccines, or who have science that kind of questions the current prevailing narrative, get down-ranked or labeled. And that's a bad thing. I don't think that's a great thing. I also don't think it's part of some kind of grand conspiracy where people are pulling the strings. And unfortunately I doubt we're going to get some kind of well-balanced, intelligent discussion of what is the appropriate place in our society for making these decisions. I think we're going to get more culture war.

Evelyn Douek:               Right. I mean it's difficult to remember how naive we were or everyone was back in the day. The pandemic was kind of a big moment, because it was the first moment where platforms said, "Okay, we will arbitrate truth." They'd been saying, "We're not going to be arbiters of truth. We're not going to be arbiters of truth." And this was the first moment where they said, "Okay, we will do it."

                                    And the main reason why they said they would do that was because, in the case of health misinformation, there are authoritative sources that we can point to as guiding our decisions. And what we've learned is that authority in this kind of rapidly developing situation, that kind of consensus breaks down. It's not real. It's illusory. They should have known that. I think many people said that immediately when they made those decisions and those announcements. But here we are.

Alex Stamos:                 Right. And there are legitimate complaints about the CDC, National Health... There were a lot of mistakes made. We get pulled in, SIO, because we did a bunch of work on vaccine disinformation. And we get pulled into all these conspiracy theories, yada, yada. I'm sure we'll show up in some more Twitter files.

                                    But one of the pieces of feedback we gave folks on the government side was that the government itself has to really adapt how it communicates. Telling people that masks were not effective, not because masks are not effective, but because you're trying to maintain masks for first responders, was one of the dumbest communication mistakes in the history of the US government. The decision to effectively lie to people because you're trying to maintain the supply of N95s for the people who absolutely need it. Instead of going and saying, "Yes, N95s will help you, but really please save those for the paramedics and such. We're working on coming up with more supply." Instead of being honest with people, lying to them was a huge mistake.

                                    And there's a couple of examples of that, of where the communication from official state and local health and federal health officials was politicized, and not 100% honest, and that then really ruined their ability to be kind of fair actors in this. And so I think, again, that is a legitimate thing that should be discussed. But that is not just a clean culture war, red/blue thing. It is more like, what is the role of a public health service in a social media era? It has totally changed, and doctors are going to have to change how they communicate with patients.

                                    And health officials are going to have to change how they communicate with the populace. And have to treat people like adults and tell them the truth, even when that truth might end up with people making decisions that are contrary to the overall public health interest.

Evelyn Douek:               Right. And I think that's another really good example of how, when we have these content moderation conversations and policy conversations in isolation, we're often asking content moderation to solve societal problems that are much bigger than the policy of Twitter or Facebook. And I think that happened a lot during the pandemic, because content moderation is right there in front of you. It's the most obvious lever to pull when there's this problematic information out in the public sphere. It's just, like, tech companies, clean it up. But as we were saying, it's not necessarily their job, and it can't fix those much deeper problems about government communication, about polarization, and in fact it could exacerbate them.

                                    I want to take the opportunity to plug a couple of our previous episodes here. So there was some fantastic reporting from The Washington Post this week about Elon Musk and his understanding of the FTC and regulations, and the consent decree. It reported about a meeting that he had where some staffers raised concerns about Twitter's ability to comply with the FTC consent decree. Musk reassured them that it's okay, Tesla has tons of experience dealing with privacy issues, and walked out of the room. And a couple of hours later, someone from his office sent an email saying, "Oh, by the way, would you mind shooting over that consent decree? We'd love to take a read of it."

                                    And so very, very concerning for people. And if you want a good rundown of why Musk should really read up on this pretty quick, we did an episode with Riana Pfefferkorn and Whitney Merrill a couple of weeks ago, titled in your feed "Elon puts rockets into space, he's not afraid of the FTC," which will give you a good understanding of why he really should be, and why this is a problem. Also, that story reported that Musk sent an order to his staffers, "Please give Bari," this is Bari Weiss, who's been one of the reporters working on the Twitter files, "full access to everything at Twitter, no limits at all."

                                    A couple of weeks ago we had a cameo from Orin Kerr, professor at Berkeley Law, on why this could be problematic, when there was speculation about whether Musk was giving reporters access to DMs, and the substantial liability that Musk could face there by opening up users' private information to reporters.

                                    So go take a listen to that. Musk, if you're listening, those are two good primers for you. Moving to our sports segment, a.k.a. our everything-is-content-moderation segment: Leo Messi claimed another world title this week, where an Instagram post of his celebrating his win at the World Cup last week became the most liked Instagram post in the platform's history, beating out an egg, which had held the title before. It now has over 73 million likes. So I'm sure Messi is equally proud of that title this week. Any other sports news from you this week, Alex?

Alex Stamos:                 Just the one thing I'd like to say to the people who work at the NCAA: please do not schedule your championships during finals week for every single university participating in them. Just as somebody who has student athletes in his class, the NCAA is driving me completely insane. So that'll just be my NCAA subtweet of the week.

Evelyn Douek:               I'm sure they're loyal listeners. And with that, that has been your Moderated Content Weekly Update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't have been possible without the research and editorial assistance of John Perrino, policy analyst at the Stanford Internet Observatory. It is produced by Brian Pelletier. Special thanks also to Alyssa Ashdown and Ryan Roberts this week, especially for working over the holidays. We really appreciate it.

                                    And if you, listener, really want to become a better person in 2023, consider making the resolution to like, share, and review your favorite podcasts. But in all seriousness, thank you for tuning in to this little experiment. We look forward to more Moderated Content next year. This has been really fun.

Alex Stamos:                 Happy New Year.