Moderated Content

MC Weekly Update: Why?

Episode Summary

Alex and Evelyn discuss trust and safety challenges for federated social media, following a report from the Stanford Internet Observatory about child safety problems in the fediverse. Also: Bluesky's flailing, Tw-... X's rebrand, a First Amendment challenge to Texas' TikTok ban impeding academic research, and more.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

ActivityPub Hub 

X Corner?!

TikTok Corner

Alex's Cyber Doom and Gloom Corner

Sports Corner

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Alex Stamos:

So I think we're going to have to rename this podcast now. I feel like Moderated Content is just way too many syllables and we're going to need as short a name as possible. Preferably, something that's incredibly confusing and does not say anything about the podcast.

Evelyn Douek:

All the people are saying that, for the best visionaries in business these days, it's all about rebranding.

Alex Stamos:

Right.

Evelyn Douek:

We've got to get on that. Any ideas?

Alex Stamos:

We have minimal brand value, but people who are buying the most recognized names in tech for billions of dollars are then throwing away those names. If they should do it for billions of dollars, then we should do it for the hundreds of dollars in brand value that Moderated Content has.

Evelyn Douek:

Why not?

Alex Stamos:

I was thinking, "Why?" Because then, nobody knows if it's the letter, "Y" or ...

Evelyn Douek:

It's, "Why," because you said they're doing it so we should do it, so that's why.

Alex Stamos:

Why did you say that?

Evelyn Douek:

We should do it, because they're doing it, so we should do it. I think it's the reason why.

Alex Stamos:

Why are you paraphrasing me about, "Why?" I don't understand.

Evelyn Douek:

I was just trying to be helpful and answer your question.

Alex Stamos:

Why not?

Evelyn Douek:

Perfect.

Alex Stamos:

No. I want to name it, "Why Not?"

Evelyn Douek:

Hello and welcome to Moderated Content's, or Why Not's, weekly, slightly random, and not at all comprehensive news update from the world of trust and safety, with myself, Evelyn Douek, and Alex Stamos. All right, Alex. This morning, your Stanford Internet Observatory colleagues, David Thiel and Renee DiResta, were out with an important report about child safety on federated social media, which I think is excellent timing and perfect for us to talk about.

We were talking the last time we recorded about your prediction that Threads will never launch ActivityPub support and join decentralized social media. Casey Newton quoted this prediction in his fantastic newsletter Platformer, but Mosseri and Threads Comms have kept reiterating that it's coming, that this is something that they're intending to do.

The official account last week was posting that they're moving towards making Threads compatible with ActivityPub, and that's the way they're headed. You wanted to talk about what happens if it does take this step, and that's where this report from SIO comes in. I think it's a really great thing to talk about. Why don't you go for that?

Alex Stamos:

Well, first, we're going to have to have the global premiere of our Threads ripping sound. This was an MP3 called Boxer Being Ripped. I don't know if this is from a ...

Evelyn Douek:

You can really tell. You can really tell that it's boxers not briefs based on that sound.

Alex Stamos:

Exactly. Let's talk first about the research. When I made that prediction, it was partially with this in mind: we were obviously well down the road of doing this research. We do a lot of child safety work at SIO, and we've had a project for a while looking at what the impacts of decentralization on trust and safety are. There are both positives and negatives, but one of the big things that we found was that, as you would imagine, there is no centralized authority in the fediverse doing trust and safety.

There are very few people who have any ability to do child safety work utilizing the kinds of tools and access to hash banks that are considered the bare minimum in the commercial social media space. As a result, our look at the top social media sites turned up, over a two-day period, hundreds and hundreds of pieces of known CSAM, and a bunch of stuff that we think is CSAM. We're not looking at it directly, but we have ... You can read the paper. You can go to io.stanford.edu. It's our top post right now.

David Thiel, our chief technologist, came up with a pretty good plan here. He was looking for hashtags that are related to CSAM trading, and then ran the images from those posts not just through PhotoDNA for known CSAM, but also through a nudity detector. The combination of somebody saying, "This is probably a child," plus having the highest possible rating on the nudity detector makes it very likely that it is new CSAM.
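To make the two-signal triage Alex describes a little more concrete, here is a minimal sketch in Python. It is illustrative only, not SIO's actual pipeline: PhotoDNA is proprietary and only available to vetted organizations, so the hash function and classifier below are stand-ins, and names like KNOWN_HASH_BANK, classify_image, and MATCH_THRESHOLD are hypothetical.

```python
# Minimal sketch of two-signal triage: hash-bank lookup plus classifier score.
# The hash and classifier are stand-ins; this is not SIO's actual tooling.

import hashlib
from dataclasses import dataclass

KNOWN_HASH_BANK: set[str] = set()  # hashes of known, previously reported material
MATCH_THRESHOLD = 0.95             # classifier score treated as "highest possible rating"


@dataclass
class Finding:
    url: str
    known_match: bool        # True if it matched the known-content hash bank
    classifier_score: float


def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash such as PhotoDNA or PDQ; SHA-256 is used here
    # only so the sketch runs. A real perceptual hash survives minor re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()


def classify_image(image_bytes: bytes) -> float:
    # Stand-in for a nudity/age classifier returning a score between 0 and 1.
    return 0.0


def triage(url: str, image_bytes: bytes) -> Finding | None:
    h = image_hash(image_bytes)
    score = classify_image(image_bytes)
    if h in KNOWN_HASH_BANK:
        # Known material: flag regardless of the classifier output.
        return Finding(url, known_match=True, classifier_score=score)
    if score >= MATCH_THRESHOLD:
        # Not in any hash bank but at the top of the classifier's range:
        # likely new material, the category the report highlights.
        return Finding(url, known_match=False, classifier_score=score)
    return None
```

The point of combining the two signals is exactly what is described above: the hash bank only catches previously known material, while the classifier surfaces likely new material that no hash bank has seen.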

The fact that hundreds of those pieces were not caught by PhotoDNA means that the fediverse is being used not just to trade known material, but unknown material. It is becoming a source for this. This is not a huge surprise to people who run big servers, because it turns out a bunch of the source of this is a handful of servers, which I'm not going to name, that are hosted in Japan.

Japan has a different definition of what CSAM is, and unfortunately this is a problem with a number of Japanese servers, which some of the biggest servers have defederated from. But overall, the long tail here is long enough that ... One of the concerns this demonstrates is that if you are a server operator and you have a reasonable number of people, your server caches the media of the content your people follow. All you need is one bad apple on your server who is not necessarily posting CSAM, but is just following accounts that post CSAM elsewhere.

Your server is automatically, in the background, pulling that stuff down, downloading it, and then storing it on your system or in your CDN. This is one of our big findings here: this problem has infected everybody in the fediverse and affects anybody who's a server operator, because all of them are now possibly legally responsible for the fact that they are effectively hosting CSAM themselves. We've already seen this with a major server that was shut down by its hosting provider, because it unknowingly had CSAM in its CDN.

They went and took care of it, but they don't have any good solutions. Read our paper. We talk a little bit about this, but this is actually a really hard problem: how do you do PhotoDNA scanning across the fediverse? We propose a novel cryptographic solution in which, when your server posts an image, you can go get an assertion, cryptographically signed, that it is not CSAM. And then, you can have that travel with the image, so it would greatly reduce the load on servers having to verify stuff over and over again.

There are a bunch of interesting challenges with that, including the fact that lots of different systems like to re-encode images, and therefore doing that kind of hashing is challenging, because to make it actually work that has to be a cryptographic hash, not a perceptual hash. But we're trying to propose a number of options here. I think one of the things that we'd like to do, and that we're going to be talking to folks about, is the creation of some kind of co-op model, where you can have a centralized service that does this much more cheaply and that is supported by somebody else.
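For readers curious what a signed "scanned, no match" assertion could look like mechanically, here is a minimal sketch. It is illustrative only, not the design proposed in the SIO paper: it uses an HMAC with a shared key for brevity where a real system would use public-key signatures from a trusted scanning service, and all of the names are hypothetical.

```python
# Minimal sketch of a signed assertion that an image was scanned and did not match.
# HMAC with a shared key is used for brevity; a real design would use public-key
# signatures so receiving servers only need the scanning service's public key.

import hashlib
import hmac
import json

SCANNING_SERVICE_KEY = b"shared-secret-held-by-the-scanning-service"  # illustrative only


def issue_assertion(image_bytes: bytes) -> dict:
    """Called by the (hypothetical) central scanning service after checking the
    image against its hash banks and finding no match."""
    digest = hashlib.sha256(image_bytes).hexdigest()  # cryptographic, byte-exact hash
    payload = json.dumps({"sha256": digest, "verdict": "no-match"}, sort_keys=True)
    sig = hmac.new(SCANNING_SERVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}


def verify_assertion(image_bytes: bytes, assertion: dict) -> bool:
    """Called by a receiving server before trusting federated media without re-scanning."""
    expected = hmac.new(SCANNING_SERVICE_KEY, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["signature"]):
        return False
    claimed = json.loads(assertion["payload"])["sha256"]
    # This is where re-encoding bites: if any hop recompressed the image, even by
    # one byte, the SHA-256 no longer matches and the assertion is useless.
    return hashlib.sha256(image_bytes).hexdigest() == claimed
```

The last comment is the crux of the challenge mentioned above: the signature has to cover an exact byte sequence, so any re-encoding along the way invalidates the assertion.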

Back to Threads. What does this mean for Threads? This is one of the things I was thinking about with Threads. If you are Facebook and you already have a significant child safety problem ... Just months ago, we published our paper on what's going on on Instagram. It turns out some of this stuff is still going on on Instagram, so we'll be doing a follow-up blog post. Despite them making some big efforts here, there are still significant problems. At least that is content that is on their servers, where they have the metadata, where they can go hunt it down. Whereas once Threads starts federating with the rest of the fediverse, now you have that content flowing in from all kinds of places.

From Facebook's perspective, it makes it much harder. It also is going to create a very fascinating legal and political problem for other servers. Because the moment Facebook starts ingesting content from other servers, they will undoubtedly be scanning it with PhotoDNA. There's absolutely no way that they're going to ingest stuff over ActivityPub and then not scan it. And then, they're going to create a massive fire hose of NCMEC reports saying, "Here is content on other servers." And that is going to create a bunch of legal jeopardy for everybody else in the fediverse.

There's no button you can hit in Mastodon saying, "Scan my server for CSAM." You have very little option right now, and all of a sudden, your IP address and your server name are going to be reported to NCMEC, and effectively the FBI through NCMEC, continuously. And so, one, I think that is one of the big issues that is going to pop up here, and something that I'd love to talk to the folks at Threads about. Hint, hint, if you're listening. I'd also love to talk to people who are in the fediverse, because I think we need a negotiated solution and an on-ramp here.

Instead of Facebook just turning it on one day, and all of a sudden, somewhere in those 20 million, 30 million NCMEC reports Facebook files every year, there's also now a ton of entries saying, "This Mastodon server. This Mastodon server. This Mastodon server. This Mastodon server," and all those people get knocks on their doors. An outcome here is that those servers get taken down, or worse, that people get arrested for hosting CSAM unknowingly. I think that's one of the things that's going to happen.

The other is, as we've discussed, you have your whole spam in-and-out problem. I think the other thing that we're probably going to see is limited federation, not full federation. It's quite possible you'll end up in a situation where content that comes from the fediverse does not show up in the algorithmic feed, because of the spam problem, and that your content will not flow out of Threads into the rest of the fediverse unless you opt in. But I'm interested in your theory on that, because I think that's where the GDPR and the other privacy expectations come in.

Evelyn Douek:

I love this. Very softball from Alex, suggesting that I talk about GDPR and DSA compliance issues that come with federation, which is such a nice and lovely question. Because honestly, I think that a lot of people don't know the answer to those questions. Not even the regulators in many of these cases. I think it's going to be a situation where these are pieces of legislation that weren't written with the fediverse in mind, and then are going to have all of these on-ramp issues themselves.

Alex Stamos:

I think it sounds like a Law Review article.

Evelyn Douek:

Yes. Meanwhile, I just will jot that down.

Alex Stamos:

Why are you giving this out? Why are you giving these ideas out on the podcast?

Evelyn Douek:

That's right.

Alex Stamos:

You've got days. You could write 100 pages on fediverse GDPR.

Evelyn Douek:

Exactly. No worries whatsoever. I think one of the things that we're seeing just in terms of ... You're talking about the trust and safety side of this, but with the legal side as well, one of the things you see is that the big players, the big guys are generally going to be okay. It's going to throw up all of these new issues, but they're used to dealing with these issues.

They know how to deal with GDPR compliance and they're ramping up for DSA compliance in August, but it's going to create all of these other problems for other people. Each of these individual servers is going to be forced to deal with these problems as well and come into compliance. That's going to be a big problem.

I'm reminded of one of my favorite pieces from the early Mastodon rush from the Financial Times, where the Financial Times on a whim went and set up a Mastodon server, and then shut it down a couple of months later saying it was an awful experience. Part of that was all of the legal compliance and potential liability headaches that come with hosting a server in this situation, like you're saying, where there's not even the tooling or the guides or anything for making that easy or possible.

A lot of these servers ... A lot of them will be micro platforms or small platforms under the DSA and are not going to have some of the most extensive obligations. But I think there will be a situation where this actually could also be a business opportunity for the big platforms, because we're seeing this growth in content moderation as a service. I think you're going to see content moderation legal compliance as a service as well.

I heard a lot of people talking about this at TrustCon, the annual industry event in San Francisco. I think it was last week or the week before. There are just so many of these providers selling content moderation tooling for these smaller platforms, for people to buy and make their jobs easier. I think we're going to see a lot of that.

And who is in the best situation to provide content moderation as a service and legal compliance as a service? It's going to be the big guys as well. So I think that'll be a really interesting dynamic that we see for Meta and these bigger platforms as they enter this space.

Alex Stamos:

It is interesting that, as we've discussed, the Europeans are very good about writing their laws so that you never catch European companies. Clearly, if those laws are not going to catch a European company, they're not going to catch most Mastodon instances. Certainly, none of them fall above that limit right now. The organization that I think is going to have to be really thoughtful about this is Mozilla, because Mozilla is launching their own server.

Everybody else that I know of in the fediverse is, for the most part, "Here is a group of people or a nonprofit that is hosting just for Mastodon." Whereas Mozilla has other revenue flows and has money and all these other things that they do. And so, that might set them apart. I think the interesting thing here is US law. We're doing some follow-up.

One of our students is writing up the legal issues here, and it should be interesting to read, but the US law around CSAM reporting requirements, 18 U.S.C. § 2258A, defines the providers who must report as electronic communication service providers or remote computing services. Unlike the DSA, there's no floor. When this law was passed, it was not contemplating that people would be running social media sites out of their closets. And so, the responsibility to report does not seem to have a de minimis standard for anybody.

That is one of the fascinating things that will happen here, trying to figure this out ... Let's say we're able to facilitate a discussion between Meta and the fediverse, and part of it is, "Great. Before you report to NCMEC, you're going to send me an email saying you found something bad, so I can deal with it. Or we build something in-band into ActivityPub to make sure that the reporting inside of ActivityPub is high quality and can be handled."

Those instances now have a responsibility under 2258A to report, and I don't think most of them know that. The web here is fascinating. For me, the net-net is the law as built does not consider any of this stuff.

Evelyn Douek:

For sure. In general, I think that there's a risk that decentralization is seen as a way to get around all of these problems. Or has been seen as a way to get around a lot of problems. The answer to all of the problems around content moderation or trust and safety or legal compliance has been, "Well, let's decentralize."

That is creating an excuse for not taking all of these obligations as seriously, because, "No. We're just doing a different kind of thing here." But no, you're not doing a different kind of thing. All of the same problems come up that you see in centralized social media or other spaces, just without the tools and the knowledge and the resources to deal with them as effectively. Speaking of, let's ...

Alex Stamos:

Speaking of, we don't have a Bluesky sound. I'm not sure what that would be.

Evelyn Douek:

Twinkling. That sort of dream wave sound of a blue sky and clouds.

Alex Stamos:

Let's try this. Let's talk about Bluesky.

Evelyn Douek:

Perfect. I love it. Bluesky is actually a great example. It's perfect because it has been having its own content moderation problems in the last few weeks. This is the Jack Dorsey-sponsored decentralized version of Twitter that's still invite-only and notionally in beta, but is now in the hundreds of thousands of users.

I don't really know what it means to be calling it beta anymore. It has been having weeks of controversy from fallout from a user being able to register an account with a racial slur in the handle, the N word. And then, poor comms from the team, in response to the outcry, not really understanding the community's concern or worries about what this indicated about trust and safety on Bluesky.

Now Bluesky is notionally, like I said, it's going to be a decentralized platform. Or that's the vision for it, but it's not at the moment. At the moment, it's still all on the Bluesky server and all happening in-house. Bluesky finally published a post about this incident in the last few days explaining what went down. It's an interesting post. I think it's a good post in terms of talking about the trade-offs that trust and safety issues raise.

The reasons, for example, why they didn't think you could have an automatic block list for all sorts of slurs to prevent users from having these words in their handles. One of them being that people reclaim slurs, and so a handle can be an important way for people to indicate that they're reclaiming the slur. Another being one of my favorite content moderation problems, the Scunthorpe problem.

Now, Scunthorpe is, I believe, a nice little town in England, but it was having all of these problems when it first started to get online and advertise its location and how nice it was for tourists to visit, because it kept getting blocked and sent to spam and things like that. That's because, if you look at the word "Scunthorpe" ... I'm not going to say it on this.

Alex Stamos:

No. You have to handle this part. I just wanted to say for everybody listening, Alex is actually walking out of the room at this moment so Evelyn can talk about this.

Evelyn Douek:

This is all on me. I'm not going to say it out loud, but let's just say that somewhere buried in "Scunthorpe" is a word that is considered pretty bad or very bad and ends up on a lot of block lists. That's an example of why brute-force block lists are not necessarily the best way of handling problems like slurs in user handles: you end up catching a whole bunch of other stuff and having a whole bunch of false positives.
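To make the Scunthorpe problem concrete, here is a toy illustration of why naive substring blocklists produce false positives. The blocklist term is a mild stand-in, and this is not Bluesky's actual implementation.

```python
# Toy illustration of the Scunthorpe problem: a plain substring matcher flags an
# innocent handle because a banned string happens to appear inside another word.

BLOCKLIST = ["ass"]  # mild stand-in; real lists contain slurs


def naive_handle_check(handle: str) -> bool:
    """Return True if a plain substring matcher would reject this handle."""
    lowered = handle.lower()
    return any(term in lowered for term in BLOCKLIST)


print(naive_handle_check("classic-cocktails"))  # True: a false positive, since "class" contains the banned string
print(naive_handle_check("totally-fine-name"))  # False: passes
```

This is the trade-off Bluesky's post describes: a broad list blocks innocent names like the town's, while a narrow or empty list lets genuinely abusive handles through.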

Now, Bluesky has since said, "Well, we realize now that maybe some false positives are a better way of handling this than allowing people to register handles with the N word and then relying on human review." The whole fact that this was a massive afterthought and they hadn't put any forethought into something that was a very likely issue ... It just shows the concern that this dynamic of "We're decentralized" shouldn't become synonymous with "And therefore, we don't have the same trust and safety obligations," or "We're not going to take them as seriously."

Alex Stamos:

Bluesky is a fascinating thing. One, they say they're decentralized. They're not. It is right now completely centralized. There are some decentralized features, but the content all flows through one set of servers that they control. It's pretty clear that, just like you would expect based upon Jack Dorsey's original vision, they use the idea of decentralization to punt. The idea is, "By being decentralized, we're not going to have to have centralized content moderation, because everybody gets to bring their own content moderation."

That's an interesting theory. One, the mechanisms to allow for that don't exist. If you're going to do that, then you have to launch with those mechanisms so people can bring their own content moderation. But as we've talked about on this show, there are really two types of trust and safety issues. There are the trust and safety issues where you as an individual are part of the conversation, and therefore a model in which you run your own code, either in the cloud or locally, that interacts with the API in a way that gives you all the tools to protect yourself is not a completely crazy idea.

It might not be super practical, but at least theoretically it makes sense. But there's also the trust and safety issue where people on platforms do stuff and the harm then accrues elsewhere. Now, this problem specifically is closer to the first one. Racial slurs are generally the kind of thing that harm the people who see them or who they're sent to. Or they're used to abuse people.

You can also argue it touches upon the second one, because if your platform allows for this kind of free-for-all, then you can create subcommunities where people are just being super racist, saying horrible things, and then that leaks out of that community. It's like saying, "Well, we'll just contain the cancer." Instead of cutting out the tumor, we're just going to take care of all the metastases that spread throughout the body. Which is, to my understanding, not how oncologists generally operate. But that's kind of Bluesky's model here.

I think this goes to a fundamental issue, which is that even if you give people really good self-moderation tools, which I would love ... The fact that Twitter has made it so that things like Block Party and such don't work anymore is a terrible idea. But that doesn't mean that you aren't responsible for the base community. Because if you allow these things to become the standard, then that will eventually leak out and get around the protections that you put in place.

There's a specific problem here, which is that they just weren't thinking about this. Like you said, they got the false positive trade-off wrong. For any trust and safety issue, you have to weigh the false positive side, and on this one, you're like, "Well, a false positive is not that bad." If you can't have the N word in your name on our platform, then life is probably still okay for that person. Versus the town in England. That's a basic problem, but I think in the long term it does demonstrate that there is a basic vulnerability in Bluesky's architectural idea around how to do content moderation.

Evelyn Douek:

Okay. We should head to our Twitter corner or X Corner.

Alex Stamos:

The X Corner. We're going to use the same sound, but yes, now welcome to the X Corner.

Evelyn Douek:

The sound still fits. If anything, it fits all the more. I was joking that the reason why we had to take the week off last week was because our sad trombone player had lip strain from all of the hard work that they've been doing on this podcast. We are recording mere hours after Musk has instituted the rebrand of Twitter to X.

Things like this, I actually find surprisingly sad. I really loved Twitter for what it was. I think it has brought a lot of good to my life and my world and brought me a lot of community. It was really good for my success and career and things like that. And so, seeing the platform be lit on fire like this is sad. I don't really know what to say.

Fortunately, CEO Linda Yaccarino said it better than I ever could: "X is the future state of unlimited interactivity, centered in audio, video, messaging, payments, banking. Creating a global marketplace for ideas, goods, services, and opportunity. Powered by AI, X will connect us all in ways we're just beginning to imagine."

Alex Stamos:

I'm pretty sure that statement is powered by AI. Right?

Evelyn Douek:

Exactly.

Alex Stamos:

Because if you asked ChatGPT, "Please give me a super generic read for a social media slash payments platform," that's what you'd get.

Evelyn Douek:

I actually think you'd probably get something better. I don't know what any of those words mean. It's unclear to me. But I do like the idea that ... The thing that we really want from Twitter right now is payments and banking, because that is definitely, after everything I've seen over the last six months, exactly where I want to give all of my financial details.

Alex Stamos:

The fact that when I make the most milquetoast comment, I get people either telling me I should die or that I'm a traitor or that it's just all cultural all the time ... It makes me feel like, "These are the people I want to place my financial future with." Awesome. Fantastic.

Evelyn Douek:

Absolutely. I don't know if I have anything clever to say about this rebrand or what it suggests is going on.

Alex Stamos:

It is the greatest destruction of brand value in history. Of the $44 billion that he paid for Twitter, 90% of that was goodwill. From an accounting perspective, that's how it was justified by the banks that loaned him money and he just threw that all away. It's amazing. It's just kind of amazing. We are watching some of the worst business decisions made in the history of capitalism. It is an amazing time to be alive.

Evelyn Douek:

The CEO is still there. Sorry. You can't see my air quotes. "The CEO is still there." Outlasting some managers.

Alex Stamos:

You can hear it in your voice. The, "CEO."

Evelyn Douek:

Exactly.

Alex Stamos:

She's doing a lot of chief executive-ing right now. For sure.

Evelyn Douek:

Absolutely. I'm sure she had plenty of say in this extremely core business decision that just got made.

Alex Stamos:

Well, I'm sure their engineers did too. Right now, if you type x.com, it just redirects to Twitter. Changing the base domain upon which Twitter operates is going to be a humongous, massive project. There is no way that the Twitter team abstracted "What is our domain?" out into a variable. Pretty much, that has been the same for 15 years. There are literally probably two million references, three million references.

If you Control-F'd through the source code of Twitter, there are millions and millions of places where the code is going to have to be changed. The idea that you can just announce it and be like, "Tomorrow we're going to be x.com" ... And then, you know he sent an internal email or Slack message of, "Hey, guys. Change the domain upon which our website operates." There are a number of chief architects, whoever is left, who are drinking heavily this morning. Or once again, a wave of resumes is flying out of Twitter.
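As a hypothetical illustration of the point about hardcoded references versus a single abstracted variable (this is not Twitter's code), compare a hostname scattered through call sites with one read from a single constant:

```python
# Hypothetical illustration, not Twitter's code: why a domain rename is painful
# when the hostname is a literal at every call site, and trivial when it lives
# in one configuration constant.

# The hard way: literals everywhere, repeated across templates, clients, emails,
# deep links, OAuth callbacks, and so on. Every occurrence must be found and changed.
PROFILE_URL_TEMPLATE = "https://twitter.com/{handle}"
SHARE_TEXT = "Read this on twitter.com"

# The maintainable way: one source of truth, flipped in exactly one place.
BASE_DOMAIN = "twitter.com"  # change to "x.com" here and every caller follows


def profile_url(handle: str) -> str:
    return f"https://{BASE_DOMAIN}/{handle}"


print(profile_url("moderatedcontent"))
```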

My understanding is that lots of the people left have visa issues and such. There are a lot of people who are basically trapped there so their families can stay in the United States. There are lots of immigration lawyers being contacted this morning, because that is just a completely and totally ridiculous ... When Facebook became Meta, they didn't rename facebook.com. Every single E5 and above, every senior engineer, would've walked out the door if Mark Zuckerberg told them, "You have to change facebook.com to meta.com."

Evelyn Douek:

Right.

Alex Stamos:

They just created a new website that was the holding company. This is an absolutely insane idea that x.com is going to become their base domain.

Evelyn Douek:

Linda Yaccarino seems to be in a hostage situation based on these tweets, but I don't think it's visa issues. I'm not sure what's going on there.

Alex Stamos:

I'm pretty sure she's allowed to stay in the US even if she quits. Linda, blink twice. What's going on, honey?

Evelyn Douek:

That's exactly what these tweets sounded like. It's really hard to believe, given all of this, that the platform is not profitable. Musk confirmed via tweet in the last couple of weeks that they're still at negative cash flow due to an approximately 50% drop in advertising revenue plus a heavy debt load.

Not surprising that the advertisers haven't come running back. We have seen Twitter try really hard ... Sorry. X try really hard over the last couple of weeks. Musk doesn't believe in deadnaming, so if I slip, I hope he doesn't take any offense. Is that one okay? Can we keep that?

Alex Stamos:

Let's just let that hang there for a second.

Evelyn Douek:

Carry on.

Alex Stamos:

Keep going.

Evelyn Douek:

So Twitter ... There was an article in Bloomberg in the last few weeks about the spike in harmful content on the platform, which, for some reason, did prompt a slew of responses from the company, which has generally remained silent. Including from all of the corporate comms accounts and Linda Yaccarino, all of which included a bunch of unverifiable buzzwords about how the article was wrong, saying that these stats are outdated and wrong and that 99.9% of tweet impressions are healthy.

The problem is, what does that mean? What does healthy mean? It's impossible to verify. The best part of this corporate comms response was the claim that, "Each step of the way, Twitter has been more transparent about this work than other platforms," which is just hilarious given all of the shutting down of the API and all of the outside research that has been cut off as a result over the last few months.

Look, it's great if they really are doing so well on the trust and safety front. It would be wonderful if any of this were externally verifiable or checked by outside researchers who aren't being paid by the company to produce these reports.

Alex Stamos:

We have talked about some of the ... Now I'm going to have to do the air quotes. Some of the "research groups" that do research on bad stuff on Twitter are activist groups. They're activists. They're trying to effect political change. It is a totally reasonable thing to be an activist for your political position, but they try to do that through what they call research, and it is not something that would be accepted in any academic context as being appropriate.

I do think there is a point here that we keep on seeing activist groups jump upon the very real problems at Twitter to push their specific agenda, or their specific point, without backing it up in a way that's really quantifiably supportable by the evidence. Nor are they publishing in a peer-reviewed situation.

Evelyn Douek:

And then, the media is also really attracted to those headlines.

Alex Stamos:

Yes.

Evelyn Douek:

And so, they often get picked up without ever being checked as well.

Alex Stamos:

I know that because what happens is that I, or one of my colleagues at SIO, will get a call from somebody in the media who says, "What do you think of this report?" We'll say, "Well, it's not that good." And then, they publish it anyway. I have seen now a couple of stories where the reporters have said, "What do you think of this?" And I will say, "I would not, if I were you, publish anything from that group, because that is not well-respected, academically valid work."

This is not coming from Josh Tucker at NYU. This is not coming from Brendan Nyhan. This is not coming from Kate Starbird or any of our colleagues who we respect. I would say shy away, and then they'd just forget that they had that conversation with me, which is unfortunate. Yes, there is a problem here. But also, the idea that they're doing better trust and safety is just completely implausible to anybody who uses the platform.

Right now, if I open up Twitter ... I just checked. I am added to a bunch of lists by very attractive women. I'm sure it is absolutely those women. Those are the real headshots of the people who are adding me to these lists saying, "Hi," or "Contact list." Or, "Hi, handsome." Stuff like that. There is a huge amount of ...

Evelyn Douek:

This podcast has been really good for you, Alex. You're getting out there and getting lots of attention. That's great.

Alex Stamos:

Yes. Once again, getting me in real trouble. I have all of those. I have a bunch of blue check marks with 80 followers that were created in the last six months calling me all kinds of bad names. Every single post I go to has a cryptocurrency scam underneath it.

For anybody who uses Twitter, it has clearly become a disaster. But that doesn't give an excuse, just because you know it feels true, to then cook the books on the quantitative side. And unfortunately, that is a little bit of what's happening here.

Evelyn Douek:

Talking about the importance of academic research on platforms: can I get a legal corner sound effect, please? Great. In the last couple of weeks, the Knight First Amendment Institute at Columbia University brought a challenge on behalf of researchers to Texas's TikTok ban. This is not a statewide ban on all phones and devices; it bans employees of state agencies, including public universities, from downloading and using TikTok on state-owned or state-issued devices.

The lawsuit challenges this ban's application to public university faculty, saying that it compromises academic freedom and impedes vital research, including the research that would be necessary to verify or investigate the claims made about TikTok and whether it's harmful. These are researchers at Texas universities who are looking into Chinese influence operations or cybersecurity issues or whatever it is on the platform.

That kind of academic research is really important, and it's important, of course, for verifying government claims about what's happening on the platform as well. That kind of research is all being hampered and made impossible by the fact that these researchers can't use TikTok on their work computers or work devices. They can't teach about it in class. They can't show TikToks, and their interactions with students are impaired as well.

So that's the basis of the lawsuit. We're going to be covering this a lot as the case develops, but I wanted to flag it here now. It's still very early days, but I think it's a really smart lawsuit at a critical time. Because while you might think that this is a much smaller ban, maybe not something that we should be as concerned about as Montana's statewide ban ... It comes at a time when both internet freedom and academic freedom are under attack, in Texas obviously, but also beyond. I think bringing a lawsuit that stands up for both of those principles is great and is really going to be worth watching.

Alex Stamos:

I just want to give a shout-out to our friend Jameel Jaffer, the director of the Knight Institute, who I think is very smart about all this stuff. He has a very good idea of how you defend the First Amendment in a totally reasonable way, in a thoughtful way, targeting things that really matter. This is a great point. It's just ridiculous. These TikTok bans are ridiculous at a state level.

As we've discussed, I think there's lots of reasons why you might want to ban TikTok, but you should do so in a structure that applies to all social media companies. And that has to be at the federal level. We cannot have 50 different rules about what kind of social media companies exist. But practically, if you don't like TikTok, then saying that researchers at the University of Texas cannot criticize it is just dumb.

To your stated point about both the risks of TikTok and the criticism of the company, this just makes no sense. I think there's also the bigger picture beyond TikTok here. From our perch in academia, it is becoming very hard to be a professor at a public university in a handful of states, Texas and Florida being the most prominent. You're ending up in a situation where the state governments are interfering in tenure discussions and are blowing up people's job offers. There's the whole thing at Texas A&M about that.

I think that's very sad, because the high school students of Texas and Florida deserve to have very strong public universities that they can go to. It will take a while, but in the long run ... The University of Texas is a fantastic institution. There are a lot of great people there. Bobby Chesney is one of the people I think of. There are a lot of people there I love to work with. And there's no reason for those people to stay if this continues to be the direction of the state, making it hard just to do your basic job in academia.

Evelyn Douek:

For sure. When I was on the job market even a couple of years ago, I interviewed at UT Austin and would've been thrilled to have a job there. Even now, just a couple of years later, I think I'd be a lot more hesitant about it given what it would mean for my career and security going forward. I think it's really important.

And I hope to get Jameel and senior staff attorney Ramya Krishnan, who's been working on this case a lot, on the podcast in the coming weeks as the case develops to talk more about it. As we talk about TikTok and transparency, I just want to note this: the flashy rollout of the European Union's Digital Services Act continues. I don't know that I've ever seen quite as impressive a road show for a piece of upcoming legislation as they seem to be doing for this.

This week, European Union Internal Market Commissioner Thierry Breton, one of our favorites on the podcast (we've talked about him a lot), carried out a so-called stress test at TikTok's Dublin offices and gave them a fail grade. We've talked about how it's unclear what these stress tests involve or whether they could possibly be meaningful in any way, but they continue to get these headlines, and they continue to work well in terms of drawing attention to the very serious and important European legislation that is upcoming. I don't really know what's going on or what purpose these public announcements serve, but anyway, it has been fascinating to watch.

Alex Stamos:

I think something the Europeans can learn from America is that the best way to advertise your law is to come up with a ridiculous backronym like the USA PATRIOT Act. DSA? That's so boring. It really should be the America Tech Sucks Act, where every single one of those letters is a full-on word in German. That's really what we need here. Congratulations on the road show, Europeans, but on the branding of the laws, you guys can do a little better.

Evelyn Douek:

We will take pitches for better branding for the Digital Services Act as the compliance deadline approaches on August 25th, and we will announce the winner that week if we get some submissions. That's the news roundup for the week, except for one important story, Alex, that you wanted to flag. It's not a trust and safety story exactly, or a content moderation story exactly, but I had completely missed it. Tell us about it.

Alex Stamos:

There's something that's going on that should be front page news in a bunch of places, and it's not. And it demonstrates how effective Microsoft is in their PR strategy and their government relations strategy that it is not.

On July 12th, CISA, the Cybersecurity and Infrastructure Security Agency at DHS, on whose advisory board I sit, the defensive coordinator of cybersecurity in the United States, released an announcement that a federal agency ... We found out later this is the State Department ... discovered that a threat actor, and we found out this is the Chinese government, was reading the emails of folks in the State Department utilizing a vulnerability in Office 365, now Microsoft 365.

This is the cloud service. For a long time, I've told people, "Do not run Microsoft Exchange," which is the email server you run yourself. If you want to be in the Microsoft ecosystem ... There are really only two major email ecosystems, the Google one and the Microsoft one, and most enterprises are in the Microsoft one for a number of historical reasons ... then you should be in the cloud-hosted version.

Well, the cloud-hosted version, the premier offering, which is supposed to be the most secure way you can run Microsoft's products, was compromised by the Chinese. You're not just talking about one or two accounts. You're talking about a bunch of accounts, including the Secretary of Commerce. You have cabinet-level secretaries whose emails are being read by the Chinese government.

This initial report was already really worrying and had a number of interesting things in it, including a very ... It was subtle, but to those of us who are in this space, it was a pretty obvious backhand slap by CISA against Microsoft over one of the hardest problems here, which is that you have to pay for the most expensive version of Microsoft 365 to be able to detect whether or not you were affected by this attack.

It is only because the State Department was on what's called the G5 license, the most expensive license, that they had the logging in place, and that they had some smart people who looked and double-checked and triple-checked, "What's up with these app IDs?", that they found it. Microsoft did not discover this. This was discovered by customers of Microsoft.

One of the problems here is that Microsoft charges people for basic security features, and that has been highly controversial. They have reversed course on that after this. It's unfortunate it's only after this. But then, there's been another analysis by a company called Wiz, which we can link to in the show notes, a well-respected cloud security company, in which they talk about the vulnerability here, which is some kind of mechanism that allowed the Chinese government to forge authentication tokens.

In any cloud service, you log in once and then you're logged in for a while. That's true whether you're doing it through a browser and it's using cookies, or you're doing it through an API. In both cases, you talk to a single authentication service. You authenticate yourself, you prove to it that you're legit, and then it gives you some kind of cryptographic token. And then, that token is useful in all kinds of places.

In browsers, that's mostly in cookies, although it's sometimes submitted as hidden form fields as well. In APIs, it is a field in the API call. But those tokens have a nice standard cryptographic mechanism that makes them hard to forge. Somehow, the Chinese figured out how to forge them. We do not know the exact root cause here, but probably they broke in and were able to steal some keys from inside of Microsoft.
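For listeners who want the mechanics, here is a minimal sketch of how signed bearer tokens generally work, and why a stolen signing key is catastrophic. This is a generic illustration, not Microsoft's actual token format: it uses a symmetric HMAC key and a two-field claim set for brevity, where real systems use asymmetric keys and richer claims.

```python
# Generic sketch of signed bearer tokens: whoever holds the signing key can mint
# a valid token for any account, which is why key theft is catastrophic.

import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"issuer-private-key"  # illustrative; real systems use asymmetric keys


def issue_token(user: str, ttl_seconds: int = 3600) -> str:
    payload = json.dumps({"sub": user, "exp": int(time.time()) + ttl_seconds})
    body = base64.urlsafe_b64encode(payload.encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


def verify_token(token: str) -> dict | None:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return None  # expired
    return claims


# Every downstream service trusts any token that verifies. An attacker who steals
# SIGNING_KEY can call issue_token() for any account, with no password and no MFA,
# which is the shape of the failure described above.
print(verify_token(issue_token("example-user")))
```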

Wiz is saying that those tokens were able to be used against almost any Azure service. This could be a much worse breach. This is possibly the most important attack in the history of cloud enterprise computing, because it possibly affected every single organization that is on Microsoft 365, which has to be probably 90% of the Fortune 500 and the vast majority of the US government.

Everybody but the Pentagon. The Pentagon, interestingly enough, runs its own mail system, mail.mil, which was not vulnerable to this, but everybody else was. Anyway, huge story. Really big deal. This is going to have a huge impact on enterprise security teams across the country, who now have to go figure out whether they were affected. It is a big black eye for Microsoft. I think it should be a big wake-up call.

There are also some pretty good Twitter threads arguing that this is a significant warning about issues at Microsoft overall, about them not taking care of security, and that they've had a really bad history over the last couple of years. I do think that that's true. Yes.

Evelyn Douek:

All right. Something to watch. Before we leave, we can't finish without a sports corner. When I posted on Threads that we were taking the week off last week ... We have the best fans and someone replied, "My disappointment is immeasurable and my Monday is ruined. You are my only source of sporting news."

We cannot let this listener go without a sporting update. Huge news, Alex. I have a sporting update this week. I'm very excited.

Alex Stamos:

This is amazing.

Evelyn Douek:

It's a big moment for me.

Alex Stamos:

I'm writing down ... I'm putting down the date. I'm so proud of you, Evelyn.

Evelyn Douek:

Thank you. The update is about Sam Kerr's calf muscle, which the whole hopes of the Australian nation are riding on at this moment. I hope people know that the Women's World Cup is happening at the moment in New Zealand and Australia. Many of our listeners will, because of course the US famously has an amazing soccer team as well, and so this is a big moment for them. But Australia is really excited about this.

We are really hoping for good things, in part because of our star striker, Sam Kerr, who is an amazing football player. The New Yorker had a wonderful profile of her this week. I learned some stats about her that I didn't know. Between 2017 and 2022, she was the top scorer in every league she played in across the highest tiers of Australia, the US, and England, sometimes simultaneously. It's been nearly four years since she last played in the National Women's Soccer League in the US, but she is still the league's all-time top scorer. She was actually a Chicago player when she was here.

Alex Stamos:

She is like the Michael Jordan of women's football, in this case. Right?

Evelyn Douek:

Right. But much smaller and slightly more adorable. A lot of hope is resting on her. In devastating news, in a training session in the warm-up for the first match, she has done something, people don't know what, to her calf muscle. The team is keeping very mum about all of this. A teammate said the other day that she might have torn it, which caused everyone in the country to have heart palpitations.

But they've since walked that back and said, "No, no, no. It's just a strain." We're hoping she'll be back in the next couple of games, but this is a huge thing for us. The Australian women's team has now sold more official jerseys ahead of this World Cup than the men's team sold in the entire lead-up to the Men's World Cup last year. So if you have a spare prayer, please send it out to Sam Kerr's calf muscle.

Alex Stamos:

This is literally, I'm looking right now, front page news in Australia. This is the hopes and dreams of the entire continent. Apparently, if you guys don't do well enough, the Matildas, which is a great name for a team ... If the Matildas don't do well, the whole thing is, "We're just going to turn ourselves back into England."

Evelyn Douek:

The whole nation is grounded, basically.

Alex Stamos:

"That's it. We're becoming part of the British Empire again. Why does this thing even exist?"

Evelyn Douek:

This is a big moment. We are all sweating this and looking at all of the twists and turns. Trying to read all of the tea leaves and expressions on all of the players' faces. Trying to work out what's going on. We will see and we will keep you updated, our listeners, who I know are going to be heavily invested in this story. It's what they come to this podcast for. For sure.

Alex Stamos:

Come on back. We'll have to get the Sam Kerr's calf sound. We'll have to figure out what that possibly is.

Evelyn Douek:

Ow. Something along those lines.

Alex Stamos:

Or some kind of Waltzing Matilda, but it goes sad. It starts and then slowly slows down. Waltzing Matilda.

Evelyn Douek:

Waltzing Matilda. That content is definitely not what the listeners come to this podcast for. We are going to call it there, folks. This has been your Moderated Content weekly update. The show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent.

This episode wouldn't be possible without the research and editorial assistance of John Perrino, policy analyst extraordinaire at the Stanford Internet Observatory, and is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman. Talk to you next week.