Moderated Content

An Investigation into Self-Generated Child Sexual Abuse Material Networks on Social Media

Episode Summary

Evelyn and Alex talk to David Thiel and Renée DiResta, Alex's two co-authors on a report released by the Stanford Internet Observatory last week with findings from an investigation into the distribution of illicit sexual content by minors online. They talk about the findings, how social media companies should be and have been responding, and the public and political response to the report.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos are joined by Stanford Internet Observatory (SIO) Research Manager Renée DiResta and Chief Technologist David Thiel to discuss a new report on a months-long investigation into the distribution of illicit sexual content by minors online.

Large Networks of Minors Appear to be Selling Illicit Sexual Content Online

The Stanford Internet Observatory (SIO) published a report last week with findings from a months-long investigation into the distribution of illicit sexual content by minors online. The SIO research team identified a large network of accounts claiming to be minors, likely teenagers, who are producing, marketing and selling their own explicit content on social media.

The investigation began with a tip from The Wall Street Journal, which provided a list of common terms and hashtags indicating the sale of “self-generated child sexual abuse material” (SG-CSAM). SIO identified a network of more than 500 accounts advertising SG-CSAM with tens of thousands of likely buyers.

Using only public data, this research uncovered and helped resolve basic safety failings in Instagram’s reporting system for accounts suspected of child exploitation, and in Twitter’s system for automatically detecting and removing known CSAM.

Most of the work to address CSAM has focused on adult offenders who create the majority of content. These findings highlight the need for new countermeasures developed by industry, law enforcement and policymakers to address sextortion and the sale of illicit content that minors create themselves.

Front-Page Wall Street Journal Coverage

Bipartisan Concern and Calls for Social Media Regulation 

The investigation sparked outrage across the aisle in the U.S. and grabbed the attention of the European Commission as the European Union prepares to enforce the Digital Services Act for the largest online platforms later this summer.

In Congress, House Energy and Commerce Democrats and GOP Senators were most outspoken about taking action to address the concerning findings.

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Alex Stamos:

If you're talking about hate speech, if you're talking about disinformation, if you're talking about anti-vax content, prevalence is a good measure, because that is content where what you're trying to do is you're trying to reduce its spread among the people who don't want to see it, right? If people don't want to see hate speech, you don't want them seeing hate speech. This is a criminal conspiracy between buyers and sellers on this platform, and so whether or not some random person sees it is not relevant to whether harm is happening, so it is a completely inappropriate metric to use and just really a bizarre one.

It's also strangely high. One in 10,000 is not a good number, right? When you're talking about Meta overall having 3 billion users, one in 10,000 is not a good number of people who get exposed to CSAM. I also don't think they did a very good job of framing up that number, because you would expect that, honestly, to be lower.

Evelyn Douek:

Hello, and welcome to Moderated Content Weekly, a slightly random and not at all comprehensive news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos, and we're doing something a little bit different today. This week, the Stanford Internet Observatory released a report which Alex co-authored along with two colleagues, David Thiel and Renee DiResta, and the report is about the cross-platform dynamics of self-generated CSAM, or child sexual abuse material. This is an important and substantial report that generated a lot of coverage and a bunch of responses that I think are really important, so we're going to spend the whole episode today talking about that report, and we are lucky enough to have all three co-authors here to do so. David, let's start with a basic overview of what the report's about and what you found.

David Thiel:

Sure. We got an external tip wanting verification of some findings that a reporter for The Wall Street Journal had been examining. These were mostly based around a set of hashtags that were being used to market the sale of CSAM that, as near as we can tell, were generated by the kids appearing in it themselves. While some of these things might have been impersonators, scammers, things like that, the majority of this network essentially looks like kids between 12 and 17 generating their own explicit material and then marketing it for sale, primarily on Instagram but also Twitter, and it also spans a bunch of different service providers in a number of different ways.

On Instagram, we built some data-gathering tools, essentially browser-based ingest, since there's not a straight-up Instagram API. We gathered as much account metadata as we could. The network on Instagram is mostly modeled after adult content creators, so they actually know to follow Instagram's content guidelines in terms of visual presentation and material. Most of the stuff there is not what you would call safe for work, but it's not explicit. The material shown is not illegal.

On Twitter, we also built out an ingest pipeline just to examine metadata. In this case, we put up a bunch of additional guardrails beyond what we would normally have, because on Twitter, posting explicit or nude material is allowed, and we wanted to make sure that we didn't save any of the media because of the risk of saving that explicit or illegal content. Instead, we passed it through PhotoDNA and other analysis tools and just dropped all of the media on the floor.
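As a concrete illustration of the guardrail David describes, here is a minimal sketch of an ingest step that checks media against a list of known-bad hashes and then discards the bytes without ever persisting them. It is a simplified, hypothetical example, not the actual SIO pipeline: the `known_hashes` blocklist and the plain SHA-256 digest stand in for a real perceptual-hash service such as PhotoDNA, whose actual API is not shown here.

```python
# Hypothetical sketch of a "drop the media on the floor" ingest guardrail:
# only metadata and match results are retained; raw media bytes are never
# written to disk or to a database.
import hashlib
from dataclasses import dataclass

@dataclass
class MediaRecord:
    post_id: str
    digest: str        # non-reversible fingerprint, kept for dedup/reporting
    known_match: bool  # did the media match the known-bad hash list?

def matches_known_hashes(media_bytes: bytes, known_hashes: set[str]) -> bool:
    # Stand-in for a perceptual-hash check such as PhotoDNA. A cryptographic
    # digest only catches exact copies; a real system would use a perceptual
    # hash that tolerates resizing and re-encoding.
    return hashlib.sha256(media_bytes).hexdigest() in known_hashes

def ingest(post_id: str, media_bytes: bytes, known_hashes: set[str]) -> MediaRecord:
    record = MediaRecord(
        post_id=post_id,
        digest=hashlib.sha256(media_bytes).hexdigest(),
        known_match=matches_known_hashes(media_bytes, known_hashes),
    )
    # Deliberately no write of media_bytes anywhere: the bytes simply go out
    # of scope here, so explicit or illegal content is never stored.
    return record
```

Matches would then be reported through the appropriate channel (as the team describes later, their findings went to the FBI via NCMEC) rather than retained.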

In terms of what we found overall, on Instagram, it's a fairly sophisticated operation. These kids understand how content enforcement works. They understand how to juggle accounts correctly. They know how to advertise using stories, do networking via stories, shout-outs to other accounts, remarketing accounts when they reappear. Like I say, it's not quite as big on Twitter but it's potentially a similar level of harm, given the possibility of distributing explicit content. Also on Twitter, one of the things that was notable is that we did get a fair number of PhotoDNA hits in our automated ingestion examination infrastructure, and that should never have been happening.

Evelyn Douek:

Right. This is something we talked about last week: PhotoDNA is the database of known, pre-identified images of child sexual abuse material that every upload should be automatically checked against.

David Thiel:

Yes. A smaller network, as near as we can tell, on Twitter, but also a similar level of harm when it comes to the severity.

Evelyn Douek:

The details of the report, I mean, they're obviously very challenging and a lot of this stuff is pretty horrific, so I don't really want to dwell on the specifics. To give listeners an idea of how not-subtle this is and how easy it is to find, can you talk a little bit about some of the search terms and the hashtags that were popping up? Is this something that's really difficult to dig up, or is it something that, if you're looking for it, is pretty easy to come across?

David Thiel:

I would say it's quite easy to come across if you were looking for it. Some of the earliest ones that we were examining were just essentially pedobait, #pedobait, with the E swapped for an X. This was both on Twitter and Instagram. You could just type in hashtag P-X-D-O and get a bunch of suggestions about other hashtags that you might want to check out. A lot of the others were of a similarly blatant nature. Others started developing these codes over time, working from acronyms and then weird variations on those acronyms and character substitutions.

As these get actioned over time, all of that stuff evolves, and that's one of the things that it seems the platforms are having a hard time with. What seems to happen a lot of the time is that they find a hashtag, it's bad, so they say, "Okay, we're going to block that one," and then you have a one-character variation on that hashtag or an additional letter at the end, and that will just stay live for weeks.
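To illustrate the enforcement gap David describes, here is a minimal sketch of how a platform could catch near-variants of a blocked hashtag instead of blocking only the exact string. The substitution map and the one-edit threshold are hypothetical choices for illustration, not how Instagram or Twitter actually implement hashtag enforcement.

```python
# Hypothetical sketch of variant-aware hashtag blocking: fold common
# character substitutions (e.g. the e-to-x swap mentioned above) and then
# allow at most one remaining character edit against each blocked term.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "x": "e"})

def normalize(tag: str) -> str:
    # Lowercase, strip the leading '#', and undo simple character swaps.
    return tag.lower().lstrip("#").translate(SUBSTITUTIONS)

def within_one_edit(a: str, b: str) -> bool:
    # True if a and b differ by at most one insertion, deletion, or substitution.
    if abs(len(a) - len(b)) > 1:
        return False
    i = j = edits = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            i, j = i + 1, j + 1
            continue
        edits += 1
        if edits > 1:
            return False
        if len(a) > len(b):
            i += 1
        elif len(b) > len(a):
            j += 1
        else:
            i, j = i + 1, j + 1
    return edits + (len(a) - i) + (len(b) - j) <= 1

def is_blocked_variant(tag: str, blocked_terms: set[str]) -> bool:
    folded = normalize(tag)
    return any(within_one_edit(folded, normalize(term)) for term in blocked_terms)
```

A real system would pair something like this with human review, since fuzzy matching on short strings produces false positives, but the point is to keep one-character variations from staying live for weeks after the original hashtag is blocked.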

Evelyn Douek:

The report really highlighted Instagram and the role that their recommendation algorithm plays in the growth of these networks on Instagram. Can you talk a little bit about that? How does the recommendation algorithm promote this material?

David Thiel:

Yeah. We primarily looked at this network by a search-based mechanism, not so much by Reels or things like that, but on Instagram, the recommendation system is extremely effective at what it does. If somebody goes to one of these accounts and follows a handful of them, they will get nothing but suggestions for other accounts in that network, to the point that it's easy to go to their Recommended for You screen or page and get 40 or 50 seller accounts recommended, and in some cases, buyer accounts as well.

Most of the initial work that we did was not based off of that UI. We were pulling, manually, follower networks and examining them in Maltego, which gave us a similar effect to recommendation engines. For most users, it's made to grow a social graph very quickly, and once it has a sense of what somebody is looking for, it's very effective at doing that.

Twitter has similar problems, although the way that they work right now is they'll recommend a handful of accounts that seem related to who you're looking at, and then a bunch of ... I don't know if they're actually sponsored, but they're very big, prominent accounts, "Would you like to follow the NBA," or something like that. It branches out to other topics rather than staying laser-focused.

Evelyn Douek:

Renee, can you talk a little bit about the multiplatform nature of how these networks manifest and how that affected the research that you were doing?

Renee DiResta:

Sure. One of the things that happens is that you can use a major platform like Instagram, as David notes, to facilitate discovery. You can grow your connections between, in this case, sellers, or make it possible for buyers to find sellers more easily. Instagram was particularly effective for that, and that was one of the reasons why this network was really very much focused on Instagram.

You do see, as with many other types of issues, redundancy, where creators will branch out to other platforms. One reason for this is, of course, if you are likely to be moderated in some way, you might want to establish a presence on a platform like Telegram that moderates very, very little. It's an insurance policy: you can direct your followers to also connect with you on these platforms that are not very likely to take you down at all.

I think we can talk a little bit about that kind of regulatory arbitrage and content moderation a little bit later. Other reasons for the cross-platform nature are also that Instagram doesn't necessarily provide the full-stack infrastructure that something like OnlyFans would, meaning your ability to actually exchange the files or receive the compensation.

This dynamic meant that we would also see filesharing services ... I think we mentioned Dropbox in there, I think there's a few others ... in which the sellers would post Instagram stories sometimes documenting, "Look, here's evidence of me sending material to buyers on these platforms," to show that they were serious, that they were actually transacting, and that the buyer was not going to get screwed in some way if they paid and then didn't receive files. There's that social proof dynamic, in which they expose the fact that they're using these other platforms for those file exchanges.

Then on the compensation front, there are gift card swap exchanges like G2G, which we mentioned several times. In the report, we describe ways in which there are common usernames on these different sites, between the Instagram username and then G2G. Payments are handled through a range of platforms, either via gift cards or through things like Amazon wish lists, which are another way in which money is exchanged for the material.

Evelyn Douek:

Yeah, let's talk a little bit about the regulatory arbitrage in content moderation that you mentioned, because I think that's really interesting. You do this comparison between the policies on the various platforms and how they compare. The report, I think, really highlights this difference between policies on the books versus enforcement and the gaps there. Can you talk a little bit about that?

Renee DiResta:

Sure. I think this is a phenomenon that happens on so many different topic areas at this point, where if you have a platform that is more mainstream and catering to mainstream sensibilities, CSAM is unique in that it is illegal, but you do see this effort to hop to the platforms in which you are least likely to be moderated in so many different areas on the trust and safety front at this point. I think maybe, Alex, you'd want to tackle the Telegram thing in particular, because their policies are pretty extraordinary, I would say, in how little they indicate a willingness to do.

Alex Stamos:

Yeah. One of the interesting things is how Telegram is being used. Once Instagram is used, as David was talking about, as a discovery mechanism, so Instagram recommends to you, "Here is a child or somebody pretending to be a child selling CSAM," they will then pivot you into a Telegram group to arrange for payment as well as to actually move the content. They'll be very careful for the CSAM to not touch Instagram, where it could possibly be caught and scanned, but to do it on Telegram. We've always known that Telegram's had a child safety problem. From this it's become clear that, one, Telegram is really a centerpiece of the relationship management between the commercial sellers and the buyers, and two, it is a platform that can be used for them to safely move CSAM without getting caught.

If you look at the policy, something David noticed while he was doing this work is Telegram explicitly says that you should not post CSAM on public surfaces, but they say public. It's like they go out of their way. They don't say, "Don't post illegal stuff anywhere." They explicitly put a modifier in there of like, "Don't post it on public surfaces," and then leave it dangling for private, which is effectively a wink-wink, nod-nod that you can use private surfaces in Telegram ... which are not actually end-to-end encrypted in some cases, in many cases ... to move child sexual abuse material and arrange for the payment for that content.

David Thiel:

Yeah. They subsequently go on to brag about how they've given no information over to governments across the world. It's weirdly leaning into that as a marketing tactic.

Evelyn Douek:

Right, okay. Not all platforms are leaning into this as a marketing tactic, and not all platforms are Telegram, so let's talk a little bit about platform responses and what they've done in light of this. There's one quote that I wanted to pull out and ask about. We've seen Instagram has said, Meta has said, it's setting up a task force, and I'm curious to get your thoughts on that. One of the quotes they gave to The Wall Street Journal about this story was that its internal statistics show that users see child exploitation in less than one in 10,000 posts viewed. I'm curious to get your reaction to whether that's a good way of thinking about this kind of harm, like saying, "Oh, this is not very prevalent across our platform. Only one in 10,000 posts viewed is child sexual abuse material." Obviously, aside from the tone-deaf nature of putting it that way, what's the more underlying problem with thinking about it in those prevalence terms?

Alex Stamos:

Yeah, I was a little surprised to see that, and that response got a lot of condemnation from journalists. It's like Facebook had time to go think about, "What is our response here," and that's the response they give. Prevalence, as you and I have discussed multiple times, Evelyn, is a good measurement on a number of trust and safety issues. If you're talking about hate speech, if you're talking about disinformation, if you're talking about anti-vax content, prevalence is a good measure, because that is content where what you're trying to do is you're trying to reduce its spread among the people who don't want to see it, right? If people don't want to see hate speech, you don't want them seeing hate speech. This is a criminal conspiracy between buyers and sellers on this platform, and so whether or not some random person sees it is not relevant to whether harm is happening, so it is a completely inappropriate metric to use and just really a bizarre one.

It's also strangely high. One in 10,000 is not a good number, right? When you're talking about Meta overall having 3 billion users, one in 10,000 is not a good number of people who get exposed to CSAM. I also don't think they did a very good job of framing up that number, because you would expect that, honestly, to be lower.

Evelyn Douek:

Right. Okay. Let's talk about the platforms' response and how this happens more generally. I mean, the three of you are extremely capable and excellent researchers, but there are three of you, and you found these networks and wrote up this report about them. Meta is this multibillion-dollar company that has many thousands of people working on this stuff, so how does it happen that this is existing on their platform? David, maybe you can talk a little bit about what have you seen in the fallout and the response in the days since you've released this report.

David Thiel:

Sure. With companies like Meta, they're very focused on trust and safety practices that they can scale up to a ridiculously high level, which makes perfect sense. The problem is that if you're trying to build globally effective child safety systems, those focus on your known threats. They focus on things like: do older men seem to be messaging a bunch of people who we've classified as likely younger users? Are they getting blocked a lot? Is media getting exchanged? That all makes sense in, say, a sextortion context, grooming, some kind of child predation. It was never made for a dynamic where you are addressing kids marketing their own content. It's a very different behavior. They're actually looking for customers, effectively.

I think those large-scale classifiers just don't actually apply to this type of network, which is understandable, but part of the problem, when you only rely on these systems that you've built up for very broad-scale, multilingual enforcement, is that some really obvious stuff can sneak in there. You see this a lot, where a reporter will just go and say, "I searched for child abuse content by searching for child abuse content or something, and I got a bunch of results." That's a thing that most of these companies do not seem to have. They don't have people trying to actually discover new dynamics on that network, things that are gaining in popularity. It's a mix of this large-scale enforcement along with reactive measures.

Now, in the case of Instagram, we did find ... or after publishing the report, Meta found ... that they had some problems internally that were exacerbating this issue. They had some stuff that was supposed to automatically help triage reports of child abuse material, and those things appeared to be broken. There were some enforcement things, like the clickthrough when people were searching for hashtags was broken.

Evelyn Douek:

Can you just describe that? This was a pretty extraordinary part of the report, so it's worth spelling out.

David Thiel:

Yeah. Instagram has built this system where, if there's content that is potentially harmful, they basically say, "Hey, we're going to describe how content under this hashtag is potentially harmful," and then offer a clickthrough: "Do you want to see the results anyway?" This was a generic system that was built for things like eating disorders, where if you get pro-anorexia content, you're going to want to notify people that this might be triggering or not good for them. Unfortunately, this same UI was mapped onto something that's completely illegal. Yeah, searching these very obvious hashtags that are related to the trading of CSAM material, people were offered to see the results anyway.

Evelyn Douek:

Clickthrough anyway, right.

David Thiel:

This is something that was there for, as near as we can tell, many months, and there just isn't someone or a team of people that is working to see how easy this discoverability mechanism is, how the techniques and hashtags are changing over time. There's just not this proactive manual investigation aspect.

Alex Stamos:

At least there wasn't. They're basically promising to do that. They've announced this task force, which is interesting. I mean, you don't hear about a task force too much. I guess this is a follow-up to the war room.

Evelyn Douek:

I was going to say, so war room has launched a task force, and then yeah, there'll be a whole-of-platform response to this problem any day now.

Alex Stamos:

Yeah. They are pulling together a network disruption, so effectively the goal is to gather up all the hashtags, all the bad accounts, and nuke them all at once, to make it hard for them to be recreated. As David talked about with the algorithm, one of the challenges Instagram has here is that if they squash an account, that person can come back, re-friend the people who have survived, and then the algorithm will re-promote them into the network.

Effectively, the recommendation algorithms here are so strong that they create a self-healing effect for these networks: if you squash accounts individually, they come back, so you have to nuke the whole thing at once. Hopefully that's happening on other platforms. Twitter fixed the problem, the CSAM scanning, after a while. They then cut our API access off, so we can't really speak as to how they're doing now. Thanks, Twitter. We do appreciate doing free quality control for you and being thanked by having our access cut off.

A number of other platforms have reached out to us about wanting to hear more. We're looking to present our research at an event this summer where all of the tech companies will be together, at least the tech companies that still care about this stuff. Apparently Twitter no longer interacts with the Tech Coalition, so Twitter might not be there, but most of the companies that are involved here should be together. Part of our goal here is to try to get some cross-platform work.

This work was also reported to the FBI via NCMEC, so we do expect there to be some arrests sometime this year. That process obviously takes a lot longer. Like Renee was talking about, you're not talking about criminal masterminds here. A bunch of these people were using their real Instagram accounts, their real Facebook accounts with photos of them, to go buy this content. It's not going to require a lot of incredible Sherlock Holmes-ing to figure out who some of these guys are.

Evelyn Douek:

How has this report been received more broadly, beyond the platform responses? How has this fed into the broader discourse we've seen? I saw a lot of press coverage about this, so congratulations. I really think it's landed and had an impact, and obviously got the platforms' attention. More broadly than that, how has the report been received?

Alex Stamos:

Yeah, I'd say it's got a lot of attention. A lot of the thanks for that goes to Jeff Horwitz at The Wall Street Journal, who was the primary driver of that. The Journal made this a big article with a big photo of me, which led to David and Renee making a lot of fun of me.

Evelyn Douek:

I loved the glamour shot.

Alex Stamos:

Well, right. Looking back, I'm really glad they didn't use the picture of me smiling in my office. Instead, it's me looking off in the middle distance pensively. Yes, I do apologize to my co-authors for that, but The Wall Street Journal gave it the big treatment. They put it on the front page of the website. It was on A1 the next day, above the fold in the paper, which is pretty cool. They gave it big billing and it got a big impact. I think we talked about the political stuff, and I definitely want Renee's read on some of the political stuff.

I'll just mention two things that I've noticed in the response. One, the QAnon-ization of child safety is a real problem. Unfortunately, we published what we believe to be a well-documented, detailed, somewhat dry and unemotional exploration of a serious problem that's causing a lot of harm, with the goal of trying to get reasonable, rational, thoughtful responses, and a bunch of people take our report and now say, "Oh, look. It's true. Hillary Clinton has child sex slaves in the basement of a pizza parlor." It's like it makes it very hard to have a real conversation about the stuff that's going on when a group of people have turned this into a culture war issue, in which they believe everybody they disagree with politically is a child molester.

Since a lot of these people also believe that our group is a secret censorship machine, you can see the cognitive dissonance in a bunch of people. We were both praised, and then bad things were said about us by Steve Bannon in a livestream. It was a very bizarre feeling of, "Oh, these guys are great when they utilize their incredible secret censorship powers for good." It was just, "Good thing they're using their evil censorship machine against child molesters now." We've got a bunch of that, but the overall QAnon thing is really frustrating.

One of the biggest Twitter posts about this conversation is by a gentleman who ... not going to name him, but he seems a little bit like a grifter, I would say ... has not done a lot of substantive work, has written a lot of really shallow pieces on social media and stuff, and okay, that's fine. Then he writes this post about us where, one, he uses our report to say Elon Musk did a great job with Twitter, which is clearly, "Tell me that you didn't read the report," right, because as we talked about, there's the SG-CSAM problem on Twitter, and straight up PhotoDNA scanning broke, and that's a problem.

The second is him then thanking the Stanford Internet Observatory for following in his footsteps on child safety, which I just want to point out that I think when he was in high school, David and I were working on this problem together, to the point where we worked on a couple investigations at Facebook where we were so aggressive that we actually got crap from privacy advocates and got written up as a scandal, that we were working so hard to prosecute people.

When I was at Yahoo, I constituted a new child safety team, and their investigations led to the arrest of hundreds and hundreds of bad guys and the release of a couple dozen sex slaves in Manila. That became such a big thing that the big Ninth Circuit case that might blow up the entire NCMEC system was based upon that work. That's back when this kid was 19 or 18 or something.

You have these grifters who have popped into this space, saying politically, child safety is on their side and therefore they own the space, but these people have never done anything substantive. They've never done any research that's useful. I've never seen them at a Crimes Against Children conference. I've never seen them at an ICAC meeting. It's just causing a real problem. If you talk to people in the child safety space, it's a real issue, because all of the surfaces that are supposed to be used for legitimate reporting of child safety problems are just flooded with the QAnon grifters and the crazy people, basically calling everybody a groomer and everything's bad. I think that's one of the issues I've seen.

It also demonstrated how Twitter's an information wasteland, because I respond to this guy saying, "That's an incorrect reading of this report I helped write," and my response gets completely buried because I'm not a blue check mark. What do you see? If you log in and you look at his post, you see all these people kissing Elon's butt because they're all willing to pay for blue check marks, or they're subscribers. They've paid their money. Then Elon retweets this guy with hearts, because Elon apparently didn't read the paper or read anything.

It just demonstrates. It's a great example from the inside of how, if you're reading Twitter, the information you're reading about this report is wrong. That is not a place you can go for serious analysis of anything anymore, because for eight bucks you can completely distort the information ecosystem. You could become the main character of something you had nothing to do with.

Evelyn Douek:

Well, I'm sure it's being read way more carefully and with much more measured response in the political sphere. Renee, Alex teased and set you up to talk about this, and I'd love to talk about it a bit. I saw some lawmakers weighing in and retweeting the Wall Street Journal report. Tom Cotton said, "Social media isn't safe for kids. At a minimum, we should require age verification and parental consent." Rick Scott said, "Every parent should read this story. Social media is NOT SAFE," all caps, "for kids." What have you seen in the political sphere? How's this report been received?

Renee DiResta:

Well, I think for about seven years now maybe, since 2017 or something ... I'm trying to think of the exact dates ... we've had a lot of legislators who recognize that social media is profoundly powerful, and they benefit from that power quite directly as legislators. There's this interesting dynamic in which many will call for regulation, while then not actually doing anything to really advance regulation, and so it's great. Several Senators who will remain nameless have had fundraising pages up, "Donate to help me be tough on Big Tech," and then that translates to absolutely no action.

One of the ways in which this was processed I think was, yeah, I read it as a little bit of the same, but at the same time, I think there are maybe two policy areas where there is widespread consensus that something should change. I think kids are, of course, the number one, and I would say probably top of the list. There are many, many different bipartisan efforts at bills that have been introduced over the years. Senator Blumenthal and Senator Blackburn right now have KOSA in progress, the Kids Online Safety Act.

I would say the only other topic areas that really attract and capture public attention and bipartisan support have been sometimes data privacy, but that's a niche thing that's hard for the public to really get riled up on, and then we are seeing some momentum towards transparency. The Platform Accountability and Transparency Act was just reintroduced with Senator Romney as a co-sponsor now, so there's some bipartisan support there too. We're seeing these areas where there is bipartisan support and momentum to do something, and I think our report does emphasize some of the very real problems, very real harms, that kids can experience online on social media.

Then the question becomes what are the policy surfaces on which a regulator or lawmaker can introduce some kind of bill that aims to try to make the system better and safer for kids. I'm a mom of three, and I think that more needs to be done to protect kids online. I think I break a little bit from some of the other folks on our team in that regard. I am very, very, very much more sympathetic to things like stronger controls by default and to things like age verification actually than other folks on our team are, but it's a question of how and what is the way to implement those potential protections in the best way possible.

David Thiel:

If I can jump in, it seems like there are two things that people are likely to push, or are already pushing, in response: one being age verification, two being some push against end-to-end encryption that they've kept hovering for a long time. In my opinion at least, if we look just at Instagram for example, neither of those things would've really applied in this case. We're talking about people that are trying to advertise publicly.

We're talking about people that know the platform policy well enough that they're not posting anything that would require age verification. If you go to OnlyFans, you're producing adult content. They have to verify you. They have to get their 2257 forms. In my opinion, that kind of age verification should be the norm on other platforms. I think if you're going to post explicit content on Twitter, you should be getting those consent forms, but people can disagree.

I think that we have to look at it from a few different angles, because just age verification or eliminating encryption would've done absolutely nothing to the network on Instagram. On other platforms, it might apply, but I think in either case, broad mandates compelling companies to do these things would not actually be addressing the core of the problem in at least some of these circumstances.

Alex Stamos:

Like David said, we were talking about Telegram having that policy. Even if Telegram appropriately encrypted group chats ... which they don't, but even if they did ... the thing they could do here is, if you see people openly advertising or naming, in an unencrypted manner, a group that is going to be trading ... if you have a pedowhore name that's not encrypted and being advertised publicly ... then maybe you take that down. You don't need to break end-to-end encryption for that. In the end, even if all this stuff's encrypted, for a network like this to happen, there still needs to be a discovery mechanism. As long as a discovery mechanism exists for people to enter the network, there's going to be a place that you can investigate.

I think this also demonstrates something we haven't talked about yet, which is that this is also a real failure by law enforcement. It is bad that Instagram didn't catch it themselves, but anybody in any ICAC or the FBI could have done this work, and I think it does show that there is a massive underinvestment here. Before we talk about end-to-end encryption, before we talk about all these new laws and KOSA and all this, why don't we just fund both NCMEC ... to be able to handle things correctly, the huge volume of reports they get, to make sure nothing drops on the floor ... and the regional computer crime task forces so they can go do their work?

There has been a bill here. Of all the stuff that's out there, Ron Wyden has a bill that funds this kind of law enforcement work, and then also fixes a couple little things about retention periods that are necessary for search warrants and stuff. Let's do that first. Before you do something crazy and impactful on the rights of hundreds of millions of people, you could just impact the rights of the criminals here who are openly advertising that they're committing these criminal acts.

David Thiel:

I think that, for both platforms and law enforcement, a lot of the work that needs to be done is just time-intensive, boots on the ground stuff. You have to actually do traditional investigation, and it would not help you to have an encryption bypass or something like that in this case.

Evelyn Douek:

I guess just to close, I wanted to ask you: you mentioned you've been working on this for a very long time, and here we are in 2023, and this is the state of things. Were you surprised at how easy it was to find this stuff, or did the findings not surprise you? And how optimistic or pessimistic are you going forward that there actually is going to be movement to fix these kinds of problems?

Alex Stamos:

To be honest, I was surprised and disappointed in Meta here. When David and I were there, one of the teams that I supervised was a very good-sized child safety team that did this just standard gumshoe work, where you go and you look and you find bad stuff, and you build a Maltego graph and you pivot and you pivot and you pivot and you put it all together, and you nuke it off the platform while you refer it to law enforcement. That's the work they did. Every single person I know who worked on that team has either left or been fired. In fact, the same week that this was happening, there were layoffs of people who work in safety investigations. I think, like David said before, he had the correct diagnosis, which is that Meta has massively pivoted into at-scale machine-based trust and safety to the detriment of human investigations, and it does not take a lot of human investigators to put a big dent in this.

I was surprised there. I guess I was less surprised that Twitter fell down, because Musk has decimated the team. Although it's a pretty basic thing for your PhotoDNA scanning to break, clearly there's a service that broke and there's a dashboard that nobody was looking at with a red light.

I'm also not so surprised but sad about the breadth of the platforms that are involved here, and how effective these networks are at creating their own OnlyFans, effectively. OnlyFans was not involved here. As we talked about in the report, we didn't find any of this stuff actually on OnlyFans, but the sellers here have created an OnlyFans equivalent by duct-taping together five, six, seven different platforms. I shouldn't be surprised, but the level of ingenuity is unfortunate.

I wish we were in a better place in 2023, but as I think Casey Newton talked about, referencing our report and some other stuff going on, we've probably passed peak trust and safety. You and I have talked about this, Evelyn, that the good years, the years of the most investment in this area, are behind us. I do think we should expect things to get worse, not just in the political sphere that we talk about all the time, but in basic trust and safety work like child safety.

Evelyn Douek:

I guess we also may not know enough about it. We might not be able to produce so many of these reports because, as you mentioned, Twitter's cutting off the API, and there's a general move to shut down a lot of transparency in Twitter's wake as well. I wonder whether your research, this kind of research, is going to get more difficult going forward as well.

Alex Stamos:

It definitely is, because obviously if you're selling or buying CSAM, you don't care about terms of service. As researchers at a hedge fund-slash-university with billions and billions of dollars and an IRB and a general counsel's office, we have to be careful legally. I do think it's a really unfortunate thing that the cutting off of data from legitimate researchers massively tilts the playing field towards the bad guys, who do not care about terms of service violations.

David Thiel:

I think it's also not just Twitter in this case. They're just the most noticeable, because their APIs were really, really good and now they're gone. The other platforms have been very hesitant to work with researchers. They've been scaling back the systems that they provide to researchers to examine activity on their platforms. I think that there has been a contraction overall, not just in trust and safety investment, but also in any mechanisms that would allow people to discover problems in trust and safety on those platforms.

Evelyn Douek:

Okay. We will leave it there. Thank you, and congratulations on the great work and the enormous impact that you've had. I think it's really important. This has been the Moderated Content Weekly update this week. The show is available in all the usual places, including Apple Podcasts and Spotify, and show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't be possible without the research and editorial assistance of John Perrino, policy analyst extraordinaire at the Stanford Internet Observatory, and it is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman. Talk to you next week.