Moderated Content

MC Weekly Update 5/1: Flops and VLOPs

Episode Summary

Alex and Evelyn talk about the totally unsurprising news that Twitter is complying with more government take-down orders under Musk and the very underwhelming "transparency report" it released this week. Also: the EU designated 19 platforms as VLOPs and VLOSEs that will need to comply with the most onerous requirements under its new Digital Services Act; it looks like someone in Montana finally talked to a First Amendment lawyer, as the Governor requests changes to its TikTok ban bill; more bipartisan moral panic on the Hill with the Protecting Kids on Social Media Act; and Bluesky's trust and safety challenges ahead.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Twitter Corner

Sports Corner

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Alex:

I've got sports news, but-

Evelyn:

Oh, yeah?

Alex:

... I can do that for the end.

Evelyn:

Wow, what a teaser. All right, let's stay tuned, listener, for the sports update at the end of this episode. But for now-

Alex:

This is what people-

Evelyn:

Yeah.

Alex:

That's what people call in for. Listen to this show for.

Evelyn:

That's right. 25 minutes of blah, blah, blah, blah, blah. And then, five minutes at the end of actually relevant updates.

Hello and welcome to Moderated Content's Weekly News update, from the world of trust and safety, with myself, Evelyn Douek and Alex Stamos. We are going straight to the Twitter corner this week [inaudible 00:00:35].

Okay. So Rest of World, which is a great little outlet that focuses on tech stories from around the world, had the goods this week with a story that confirmed what we suspected and predicted all along, which is that Twitter is complying with more government demands under Elon Musk than it was before. And that actually, based on the data that it had, the company has not refused a single request since Musk took ownership.

So, it is both getting more take down orders and complying with more of them. So, it may be that governments have worked out that they're getting less pushback under new ownership, and so therefore they're asking for more things. And the compliance rate, which previously hovered around 50%, has now gone up to about 80%.

And this data was compiled using information reported to the Lumen database, which is maintained by the Berkman Klein Center at Harvard, which is a voluntary process, and it looked like it had been automated within Twitter. So, those reports just kept coming in even through the change of administration.

Well, it turns out Twitter has now stopped reporting take-down orders to Lumen as of April 15, according to Lumen. So, a good little story that had some predictable insights, but don't expect to see more of these, given that little avenue of transparency has now been closed off as well.

Alex:

So, I mean, I think it's time for a victory lap for the two of us. This is something we predicted exactly: that Twitter would reduce its pushback against governments. That despite Musk talking about freedom of speech and freedom from government requests, the fact that the man who owns it has all of his money in a company that makes physical things and ships them around the world, and in fact makes a lot of those physical things in the People's Republic of China, and really wants to get into the Indian market, who has been meeting with Modi, has been meeting with Indian officials trying to break into India, now the most populous nation on the planet, meant that he would not fight governments on their requests.

Totally predictable, something we both said. Elon Musk does not care about censorship. I'm just going to say it. It was completely obvious from the beginning that this is where we were going to end up. It became also obvious because he fired all these lawyers, saying that they were deep state plants, yada, yada. Those lawyers had a long history of fighting against requests from around the world, including the United States, to censor content online. Long, long history. Vijaya, Jim Baker, these are people who had gone to court over and over again around the world, or who had pushed back in a variety of ways against international requests.

And they were fired as deep state plants, yada, yada. In some cases, really smeared by Musk and smeared by members of the House and such. And it turns out those people were doing their job and trying to protect the speech of Twitter users.

So, I do not want to hear from the Musk stans or the blue check marks ever again about deep state conspiracies and censorship, because Twitter is now way worse because of this. It is not a safe platform. If you live in an authoritarian state, or even in a democracy that is trending in that direction, like India, it is not safe to use Twitter anymore.

Because the Lumen database has the take-down requests. It does not have the data requests, and that is data we no longer have, because Twitter has stopped its transparency reporting on how it provides user data to these governments. But if they've stopped fighting the take-down requests, then the odds that they're actually standing up for the privacy of individual users against authoritarian states, and against democracies that do not have the kind of legal protections we have in the US, are pretty much nil.

Evelyn:

Right. So they did a release, a blog post this week, that purported to be that kind of transparency reporting, because of course Musk has extolled the virtues of transparency and how important it is.

And it covers both its voluntary enforcement of its own rules, which we'll come back to, and government requests. On the government request side, it's completely ridiculous as transparency reporting, as you're saying, Alex. There's just nothing here that really describes what Twitter's doing anymore.

Alex:

Right.

Evelyn:

It says, "We've received approximately 53,000 legal requests to remove content from governments during the period, and also 16,000 requests for user data from over 85 countries. Compliance rate for these requests vary by requesting country." That's it.

Alex:

Oh, that's useful. It varies?

Evelyn:

Yeah. Sometimes it's high, sometimes it's low.

Alex:

Yeah.

Evelyn:

Are you in one of those countries? Who knows? Just-

Alex:

I mean, I think this is a pretty good indication that the numbers have gotten worse, right? They created a table for the actions they took enforcing their own rules.

It would've taken them a couple of hours to create a table here based upon the actual data requests. The fact that they did not means that the numbers are bad.

Evelyn:

Right.

Alex:

So, here's a DSA question. Will they have to, obviously they have to report a bunch of stuff that happens in Europe, is there anything in the DSA that requires them to report what's going on in Turkey, or India, or anywhere else around the world?

Evelyn:

So, I don't know the answer to that question. I think the answer is no.

Alex:

Yeah.

Evelyn:

Because I don't think that they have jurisdiction. But, yeah, I'd be happy to be proven wrong if someone wants to write in and correct me.

The interesting stuff in the transparency report is the other table that you mentioned, of their enforcement of their own terms of service. I mean, this is a guy who took over promising to roll back the content moderation rules, and this part of the report insists that Twitter is removing a lot more content than it was previously.

Alex:

Yeah.

Evelyn:

So, it's saying that there's a 29% increase in enforcement of the rules by taking down content, a 20% increase in taking down accounts, and a 28% increase in suspending accounts. And all of those figures in the list are pretty much in keeping with what they were reporting before; I went through and compared them with the last transparency report before Musk took over, and they're pretty much the same across categories like hateful conduct.

I mean, obviously, COVID misinfo is gone. But among the others, violence is way down, while illegal or certain regulated goods or services was way, way up, nearly double. I don't know what would be causing that. There's certainly not enough information in this report to make it clear.

Alex:

That kind of spammy thing can be very attacker-controlled. You could have just one group that, now that Twitter has fewer people working on front-end spam prevention, was able to get through, get a bunch of stuff up, and then it was taken down manually or through the creation of a new classifier.

So, it's hard to read too much into these numbers for anybody, just because they depend on what spammers are feeling like that day.

Evelyn:

Right.

Alex:

Or if they have a new thing to sell, or if they've got a new way to get around your fake account detection, those numbers can double or triple. And it doesn't mean much other than that one group in Nigeria had a good month.

Evelyn:

Right. So yeah, you can't tell very much from this, except that you can tell that there wasn't some massive big red stop button pushed on the content moderation.

Alex:

Yeah.

Evelyn:

We don't know anything about accuracy. We don't know what was taken down or anything along those lines, but it certainly seems that content moderation is continuing and very much in line with what it was.

But that, again, doesn't reflect underlying trends. They're not reporting prevalence rates, in terms of what people are actually seeing on the platform, or anything along those lines. So, if the baseline is changing, the relative take-downs could or maybe should be very different, but we just don't know any of that. And we won't, without API access, as that's all being shut off as well. So, a transparency report that basically reveals very little.

Speaking of the DSA, we are headed over to Europe.

Speaker 3:

[foreign language 00:07:43]

Evelyn:

April 25 was a big day for the Digital Services Act, because the European Commission made its first designation decisions of VLOPs and VLOSEs, which is what I'm choosing to call them, very large online platforms and very large online search engines.

These are the platforms that reach at least 45 million monthly active users and will need to comply with the highest level of obligations under the DSA. And those will come in around four months. So we are going to see, for example, by the end of August, these companies are going to have to start doing their very first risk assessment reports, and they're going to have to have independent audits a year later.

All of these platforms are basically the ones that you would've expected, you know, you see your Facebook, your YouTube, Twitter, TikTok, Snapchat. One there that I hadn't heard of is Zalando, which is apparently a very successful German shopping platform. So, good for it and its VLOP status.

Alex:

It's fascinating that they have a couple of marketplaces in here, right? When you think of the DSA, you're not thinking of the Apple App Store or shopping sites, even though they have user-generated content in the form of reviews.

But it seems a little over the top. Obviously, the Digital Markets Act applying from a competition perspective makes a lot of sense. But of the core abuses people care about on the internet, while I am not super happy with how Apple app reviews work, I also don't think it merits this level of risk analysis and stuff. The odds of Apple app reviews helping support a genocide in the developing world are pretty low, right? And so, to hold them to the same level as Meta or TikTok seems weird to me.

Evelyn:

Yeah. And I mean, also in there: LinkedIn, Pinterest, Snapchat, all complying with the same level of obligations.

Alex:

Right.

Evelyn:

People have often talked about the problems of anti-competitive effects potentially of applying these heavy obligations to much smaller platforms. But all of this is still a big open question mark of how it's actually going to play out in practice. And it's going to be an exciting 12 months to find out, so.

Alex:

I mean, one of the fascinating ones is that it applies to the Wikimedia Foundation, to Wikipedia, which is a nonprofit project supported by funders. 99% of the work that happens on Wikipedia is done by volunteers. And so, they don't have a huge compliance team sitting around to go handle this.

In other news, the Wikimedia Foundation pushed back against the UK, because the proposed UK Online Safety Bill would create obligations around age verification for Wikipedia, that they said that are incompatible with their privacy guarantees.

And so, it will be interesting to see how they respond to this, whether or not they tell the EU, "Eh. We're not going to do it." I'm just not sure how the EU would enforce anything. I guess they could ban donations from Europe, but they're not going to block Wikipedia or force Wikipedia to block European users. So, it should be interesting to see what happens there.

Evelyn:

Yeah. So there are fines under the DSA up to 6% of global annual revenue. As you say, that's not exactly going to bite here, but again-

Alex:

Well, it's quite possible the Wikimedia Foundation has no bank accounts in Europe, unlike all of these other organizations, which employ people in Europe and take ad revenue in Europe. There is a good argument that this is a situation in which the EU is really stepping on the toes of everybody else around the world, trying to regulate what is effectively a global service run by volunteers.

Evelyn:

Yeah. I mean, one of the things this structure is supposed to provide is adaptability, you know, companies are going to do risk assessments based on how their product works. And so you're not expecting exactly the same thing from Wikimedia as you are from TikTok or whatever.

But no one really has any idea how that's going to play out in practice, how these risk assessments are going to work, who's going to do these audits, and how they're going to look. So, it's going to be exciting times, a full employment program for European platform lawyers, if you're interested.

Alex:

Has the EU ever done something that reduced the amount of billable hours charged by... There's definitely a feedback loop here, where everything they do is just to the benefit of European lawyers, even if it doesn't have any actual positive effect on the ground.

Evelyn:

Yeah. And this is also auditing firms and all sorts of people. And we're going to see a whole bunch of independent dispute resolution industries pop up as well, because that's another thing that these platforms have to provide as part of their obligations too.

Alex:

Well, the audit, I'm glad you brought up the word audit. Speaking as a cybersecurity professional: an audit is a test against a known baseline, right? There is no baseline defined by anybody. There's no ISO standard. There's nothing out there that you can test your trust and safety systems against.

And so, you're going to have the creation of these baselines by different audit companies, and they're going to be complete BS. One of the things that's going to happen here is you'll have a bunch of companies competing; we saw this in the early days of PCI DSS. It got a lot better, but the credit card industry actually had a baseline, and you could still choose your auditor.

And a couple of companies, that I won't name for defamation purposes, ended up becoming by far the largest PCI auditors because they got the reputation of being the loosest QSAs, or qualified security assessors.

But anyway, I think you'll have the same thing here, where somebody's going to make a ton of money selling a quote, unquote, human rights audit and a trust and safety audit, based upon a baseline they've created that anybody can pass. And because the EU hasn't defined any kind of actual baseline standards here, there's not much they're going to be able to do about that.

So, again, like you said, a lot of billable hours are going to go into creating a lot of paper that doesn't actually affect the truth on the ground.

Evelyn:

Right. I mean, you're going to have the Big Four probably launching these. It's a business opportunity for them potentially.

Alex:

Right.

Evelyn:

And there's no sort of, I mean, do we think that Ernst & Young or PricewaterhouseCoopers have all of this built-up expertise on what trust and safety auditing should look like? No.

Alex:

And I don't think the DSA requires them to be like an AICPA firm, right? So it's not like any of the post-Enron rules that required firms to belong to the AICPA to have standards around what their audits do. So you're going to have this whole second tier of audit firms, that can't do the really big public company stuff, making a lot of money here.

Evelyn:

Right. It'll be interesting to see. I have no idea how all of these dynamics are going to play out. So, that's going to be a fun summer. I actually want to dig into it a lot more over the summer, on the podcast, get some people from Europe on to talk about the DSA and compliance and what they're seeing. Because I think it's going to be a fascinating case study, at the very least, in social media regulation.

Speaking of social media regulation gone right or wrong, let's go to Montana where it seems like maybe someone in Montana finally spoke to a First Amendment lawyer, because the TikTok Ban Bill, that we've spoken about on the podcast before, that was the one that explicitly said we're just banning TikTok, and then had this great section at the start that just said, "We're banning it because of the NyQuil chicken and the Milk Crate Challenge videos." That act, the governor looked at it.

Alex:

We are making a decision based upon speech.

Evelyn:

Really protected stupid speech, that is the stuff that we want to ban. That one, that bill has been held up because the governor took a look at it and is seeking amendments to make it broader. So it's no longer going to be applying specifically to TikTok, but it's going to be applying to all social media applications that provide certain data to foreign adversaries.

So, it's less Bill of Attaindery, in terms of just seeking out a specific company and trying to punish it. But it's still going to run into a lot of the same First Amendment problems, even if they've deleted that section from the Act, because it's bringing a sledgehammer to crack a nut in terms of banning a platform if you're concerned with certain kinds of illegal content. And it's also now going to be insanely broad, where it's going to apply to any social media application that collects personal information or data, and provides that data to a person or entity located in a country designated as foreign adversary.

So, any app that provides personal information to another person in a country like China or Iran, that's going to cover most social media platforms, given that many of them have users in those countries. So this, again, is a vape-like situation, as these states try to work out how to regulate social media and continue to fail.

Headed to The Hill now: this week, yet another bipartisan child safety bill was introduced in Congress, the Protecting Kids on Social Media Act. So, this act would ban kids under 13 from using social media, require parental consent for kids 13 to 17, and ban recommendation algorithms for minors, or at least that's how it's being marketed. It has four co-sponsors, Democrats and Republicans, Brian Schatz, Chris Murphy, Katie Britt, and Tom Cotton, who are proudly boasting, "We are promoting this bill and we are also the parents of young kids." Which I can only assume means that they're incredibly sleep-deprived, and therefore not thinking straight as they propose this bill.

I don't fully understand it. The ban on under 13 is that they cannot create an account, but they're still allowed to view these platforms. Video game platforms are specifically exempted from the platforms that it applies to. Algorithmic recommendations can still be used as long as it's based on the context, including what the individual has viewed, as long as it's not based on personal data. But, I mean, just mainly-

Alex:

Right. Which is the real problem.

Evelyn:

Yeah. Right.

Alex:

The real thing that's hurting children is recommendation algorithms, or people selling ads to kids. It's just so completely disconnected from what is actually harming children online. And the whole under-13 thing: if they've got kids, they understand that any kind of age verification scheme is either going to have to be totalitarian, where you have to have people showing ID to get a code to sign up for an account, which works, it's how the People's Republic of China does it, right? Or it's going to be trivially bypassable. Right?

I'm really disappointed, honestly, in Brian Schatz here, because he has done some reasonable things and been a good voice on some stuff. Anyway, sorry. Sorry to interrupt you, but this kind of crap-

Evelyn:

No.

Alex:

... just drives me insane.

Evelyn:

No. I mean, I can't really work out what's going on here because, yeah, they say, "Oh, don't worry, nothing in this would require government-issued IDs to be used for verification," which is like, okay, but then give us another option. They don't really have a good solution for the age verification problem.

Alex:

Right.

Evelyn:

But it's just the complete lack of concern for kids' speech rights, and for kids in these conversations at all, which is pretty astounding to me. I don't understand how there isn't more awareness of what it means to require a 15-year-old to get parental consent before they can use platforms that they use for education, for talking to their friends, for being part of their social community, including the obvious example of an LGBTQ kid in a household where that's condemned. The idea of getting in the way of them accessing social media, with just no concern at all for these kinds of equities, is pretty astounding. I'm pretty disappointed as well.

Alex:

Look, if you want to make the argument that it would be better for kids, if we went back to the '90s, and we all had our clear plastic phones plugged into the wall, and your parents yelled at you for talking to your friends all day because they're waiting for a call to come in, and that we didn't have all these social pressures, I'll accept that fact, right?

I would accept it if we could roll back the world, but that's not the world we live in. And kids are not going to go quietly into that good night. Passing legislation is not going to all of a sudden make 14-year-olds, or 12-year-olds or 11-year-olds, who have built their entire social life online, decide that they're just going to be okay with this.

And so, as a result, anything you do is going to have to be incredibly aggressive in clearing out teenagers using social media. And there's absolutely no way that the country actually has the stomach for that, even if it passed constitutional scrutiny, which I'd imagine you don't think it would.

Evelyn:

No, I don't. And it's why I find the video game platform carve-out also particularly hilarious, because it's like we've moved from one moral panic to the next one. I don't know whether they've done that because the ship has sailed on convincing people about video game platforms, or because there was a law that tried to restrict minors' access to video games, and that was struck down as unconstitutional.

Alex:

It's because Brad Smith, the President of Microsoft and a constant presence on The Hill, earns every single dollar he is paid. And Microsoft is one of the big companies here, right?

So, one of the things you have to understand is that there are a bunch of ID verification companies and age verification companies that are clearly pushing this, right? There are a bunch of lobbyists on The Hill, because two or three companies want to be kingmade here; all of a sudden they would be $50 billion companies if one of these things passed.

And so, I think that's part of what we're seeing here is very effective lobbying by a small number of corporate entities to say, "You need to require every big company to use our services."

Evelyn:

Right. That makes a lot of sense actually, because the key case in this area, Reno v. ACLU, the original internet case back in the '90s, was about restricting minors' access to harmful material.

One of the things that case hinged on was the idea that there wasn't good age verification technology available at the time, and the court said that this was going to have way too much impact on freedom of expression online, including adult access to materials. And so, the idea that there are these companies now who are saying, "Times have changed, we have this technology now," would, I guess, open that door. But query the effectiveness, and, like you said, the data collection that would be involved in running these systems.

And the other big development this week: somehow, out of nowhere, Bluesky exploded. So, this is now a super popular platform, or at least if you're on Twitter, it seems to be super popular. I am not on Bluesky, so I know very little about it.

Alex, are you one of the cool kids on this new platform, which I understand is like an oasis in the barren desert of social media?

Alex:

I am reluctantly one of the cool kids. I have not updated my profile. I have not posted anything, and still people are following me. Which also kind of comes back to the authentication issue that we have with all these different platforms: you have no idea whether Stamos at Bluesky is actually me, right?

I even changed my handle from Twitter. Who knows? It could be John Stamos, right? That'd make much more sense.

Evelyn:

Have you considered?

Alex:

Well, that's possible. Yes. Yeah, it's true. Why is the Full House fan club following me?

Yeah. Bluesky, I mean, it's been an interesting week for Bluesky. All the crazy blue check mark stuff drove people off. A lot of the nerds have moved to Mastodon. A lot of my community is on Mastodon, and that's where I'm spending most of my time these days. But the less nerdy people have been looking for something simpler.

Mastodon is a mess from a product perspective. It is hard to use, hard to sign up for, and people do not understand federation. And so, Bluesky looks exactly like Twitter, to the point where, I don't know if you have an opinion on design patents and IP law, but I think they're probably pushing the edge, because it looks exactly like Twitter: the font sizes, the interaction design, and such.

In theory, Bluesky is supposed to be federated. They use this thing called the AT Protocol, a protocol incompatible with ActivityPub, that has some cool ideas around identity and stuff. We can cover that in the future, but there are some cool ideas here about splitting content and metadata, and about having a cryptographically-signed identity that you can move to different platforms. I think those are cool ideas.
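(To make that identity idea a bit more concrete: here is a minimal, hypothetical sketch, not the actual AT Protocol, of a portable, cryptographically signed identity. The idea is that posts are signed with a key the user controls, so any server can verify authorship no matter where the account is currently hosted.)

```python
# Illustrative only: this is NOT the AT Protocol, just the general idea of a
# portable signed identity using an Ed25519 key pair.
from cryptography.hazmat.primitives.asymmetric import ed25519

# The user holds the private key; the public key acts as the portable identity.
identity_key = ed25519.Ed25519PrivateKey.generate()
public_identity = identity_key.public_key()

post = b"hello from a portable account"
signature = identity_key.sign(post)

# Any server (the old host, a new host, a search index) can check authorship
# against the public identity without trusting the server that stored the post.
public_identity.verify(signature, post)  # raises InvalidSignature if forged
print("post verified as coming from this identity")
```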

Right now, it is not really federated. It is really just all running on Bluesky. And the big thing we learned this week is Bluesky effectively has no trust and safety mechanisms. People are flooding to it and saying, "Oh, this is this wonderful oasis." It's a wonderful oasis because it has a very limited set of invite codes that were being handed out to kind of old blue check influencers, right? And those people were inviting their friends and they're not inviting trolls and they're not inviting people on [inaudible 00:23:12].

And so it seemed great, but not because of any moderation. And we figured that out because, unfortunately, those invite codes did not have enough entropy in them. So people were able to brute-force them, and a bunch of trolls flooded the site and ended up attacking people and saying things, and there was nothing Bluesky could do. They didn't even have blocking before that happened.

So overall, I think Bluesky launched at least three months too early, and there's this huge rush to it. But they're not ready because they don't have any of the trust and safety stuff figured out.
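(As a rough illustration of the entropy point: if invite codes are short and drawn from a small alphabet, the whole code space can be enumerated cheaply. The numbers below are entirely hypothetical, purely to show the scale of the problem.)

```python
# Back-of-the-envelope sketch of why low-entropy invite codes are guessable.
# All parameters here are hypothetical, not Bluesky's actual code format.
import math

alphabet_size = 36          # hypothetical: lowercase letters + digits
code_length = 5             # hypothetical short code
keyspace = alphabet_size ** code_length
entropy_bits = math.log2(keyspace)

guesses_per_second = 1_000  # hypothetical rate against an unthrottled endpoint
hours_to_exhaust = keyspace / guesses_per_second / 3600

print(f"{keyspace:,} possible codes (~{entropy_bits:.1f} bits of entropy)")
print(f"entire space exhausted in about {hours_to_exhaust:.1f} hours at {guesses_per_second}/s")
```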

Evelyn:

Yeah. The idea that all of these people are installing a platform that didn't, until very recently, have blocking is quite amazing to me. The ability to block people is the fundamental safety tool for people who get harassed online.

Alex:

Yeah.

Evelyn:

I mean, what do you think about this idea? I've been getting questions from reporters about this as the future, in terms of decentralization. What about the content moderation problems that are going to occur once this does get more fully off the ground?

Alex:

There is a fundamental problem for all decentralized social media, in that who is responsible for trust and safety is going to be very complicated. Right?

Now, one of the interesting ideas is that you can effectively pick and choose your trust and safety provider. So, if you want to have a situation in which you can see all kinds of crazy stuff and you're fine with it, and you can see nudity, and people can send you spam, and you'll just block people, but you don't want any of that prevented ahead of time, then the federated world's probably great for you.

If you want a really controlled situation, then we don't really have a good example of that yet in the federated space. And Bluesky at least has money and VC backing. I think this is also going to be a significant challenge on the Mastodon side, because a lot of these servers are being run by volunteers, and people get burned out pretty quickly handling queues of hundreds or thousands of moderation requests.

The other issue here is that, at least on the Mastodon Fediverse side, the tools available to admins are really bad right now. And there's almost no proactive tools. And so, this is something that actually our team at SIO is working a bit on, with one of the large nonprofits that's getting into the Mastodon space. Because just basic things that are super basic elsewhere, like scanning for CSAM, doing basic detection of people sending death threats to each other and stuff, none of that exists on Mastodon right now. And so, it's a very manual process and people are going to get burned out.

So, it is a fascinating question: are the upsides of being able to choose what kind of trust and safety regime you live in going to be worth the fact that distributing it makes the work much harder, and the economics much more difficult, because you have lots of nonprofits? Again, Bluesky is a for-profit company, I believe, or a public benefit corporation, but they've got money, right? A bunch of people invested money, including Jack himself. And so, they really don't have an excuse not to hire folks.

The good thing for them is a ton of good people are on the market right now, because Twitter has fired most of their trust and safety people. Meta just had another round of layoffs that cut deeply into trust and safety, like you and I discussed last week. And so, there's a lot of great people out there. So Bluesky, at least, they have the opportunity to hire some good folks.

Evelyn:

Yeah. I mean, I think the thing that the decentralization really shows acutely is the trade-offs in this area. We don't talk about them enough. If your number one problem with content moderation is too much centralized control in the hands of one person, okay, then decentralization is going to solve that problem.

But allowing people to pick and choose their own content moderation standards isn't necessarily going to solve the CSAM problem, or the echo chamber problem, or the polarization or extremism problems. And so, talking about those trade-offs more openly would at least be more useful.

And that, I think, is it for the week. Alex, you teased some sports news. What's going on?

Alex:

Oh. So very, very sad day. Game seven between my beloved Sacramento Kings and the Golden State Warriors, which is my number two bandwagon team that I've... I've seen, I mean, I've gone to Golden State games since I went to college in the East Bay. But the Kings are my first love and the team I watched since I was a little kid, and my dad took me to games.

Went to game seven. An amazing series, one of the best series I've seen in the NBA. But unfortunately, the Warriors are just too good, especially Steph Curry. Steph Curry put in an amazing performance and now has the record for points scored in a game seven. And Golden State is going on to hopefully defeat the Lakers.

So, I was very sad about it. But it's a good run and it's a young team. Unlike the Warriors, Sacramento is on the upswing here, and I think there's going to be great things from them in the future.

It was also an interesting experience because I was actually, if people are interested, I'm on another podcast today, this little podcast called This Week in Tech, which I'm sure is 20 or 30 times larger than us, 50, a hundred times. But I went up to-

Evelyn:

It's quality. It's quality, not quantity.

Alex:

Yeah, exactly.

Evelyn:

So-

Alex:

I went up to Petaluma to record that in their studio, and got to watch the game, and I was wearing my Sacramento shirt. They have video, unlike us. So, yes. But I had to put the Sac shirt, the Kings shirt, back away for a year. But it will be back. And maybe next year, you and I will have a studio and can do a live stream on TV, while I wear a Kings shirt and game seven's going on.

Evelyn:

We can make it happen. Well, I'm very, very sorry for your loss. And thank you so much for picking yourself up from this hard moment, because the show must go on. And our very, very high quality listeners and I appreciate you showing up for work today.

Alex:

Yeah.

Evelyn:

Regardless, so-

Alex:

Right. Well, it's going to be a rough Monday because we're entering, for my class, the dark period of the class. So I have two lectures on child sexual exploitation, suicide, and terrorism. So, it's the real, unfortunately, I was hoping to come in. I guess, it's probably thematically better that I'm not super excited coming in and talking about OCSE. So, I guess, it fits that.

Evelyn:

You'll really bring the mood down, setting the stage. All right.

And with that, this has been your Moderated Content Weekly Update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent.

This episode wouldn't be possible without the research and editorial assistance of John Perrino, of the Stanford Internet Observatory, and it is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman.

Talk to you next week.