Moderating content for end-to-end encryption
Michael Cunningham
Feb 20, 2025

Sarah Scheffler, assistant professor of Software and Societal Systems and Engineering and Public Policy at Carnegie Mellon University, is conducting research that explores alternative content moderation methods for end-to-end encryption (E2EE) that remain private and secure.
You may not realize it, but while completing routine tasks online, such as text messaging with friends, sharing a link with a colleague in a Zoom meeting chat, or accessing a website over HTTPS, you may be taking advantage of a cryptographic security method called end-to-end encryption, or E2EE.
E2EE ensures data is encrypted on a sender's device and remains encrypted until it reaches the intended recipient. This means that no third party—including service providers, hackers, or even government agencies—can access the data while it is being transmitted.
“The privacy and security that are facilitated through cryptography are all about keys,” explains Sarah Scheffler, assistant professor of Software and Societal Systems and Engineering and Public Policy. “Only parties that have a key can complete a certain function or task.
“What makes end-to-end encryption special, and separates it from other types of communication we do online, is that the ends of the communication, which means you and the person or people you're talking to, have the keys. But the server that is facilitating that communication does not have the keys. That makes it both more secure and more private.”
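To make the idea concrete, here is a minimal sketch of public-key end-to-end encryption using the PyNaCl library. The names and message are illustrative, not drawn from Scheffler's work: the point is that only the endpoints hold private keys, so a server relaying the ciphertext cannot read it.

```python
# Minimal sketch of end-to-end encryption using PyNaCl (illustrative only).
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
sender_private = PrivateKey.generate()
recipient_private = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sender_box = Box(sender_private, recipient_private.public_key)
ciphertext = sender_box.encrypt(b"meet at 7pm?")

# The server only ever sees 'ciphertext'; without a private key it cannot decrypt it.

# The recipient decrypts with their private key and the sender's public key.
recipient_box = Box(recipient_private, sender_private.public_key)
plaintext = recipient_box.decrypt(ciphertext)
assert plaintext == b"meet at 7pm?"
```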
E2EE strengthens users' privacy and security, but it also complicates platforms' ability to moderate user content. Policymakers are concerned that encryption will make it impossible to detect, flag, and remove especially harmful content such as child sexual abuse material (CSAM), violent imagery, terrorist or extremist content, and misinformation.
To help navigate this trade-off, Scheffler's current research focuses on understanding the landscape of child safety reports, identifying oversight challenges in the moderation of violent extremism, building technologically verifiable transparency reports and cryptographic methods to verify the accuracy of content scanning, and exploring alternative content moderation methods for E2EE that are still private and secure.
“Our current approach to scanning for CSAM on unencrypted platforms is match-list based. The National Center for Missing & Exploited Children has a big list of all the CSAM that's ever been reported to them, social media companies report new CSAM to them, and that list is used to detect other CSAM content on their platforms,” explains Scheffler. “A similar approach is taken for terrorist content, with a list maintained by the Global Internet Forum to Counter Terrorism.
“AI definitely changes the picture both for the content sent and the ways to detect it. I’m working on methods that will not compromise the security or privacy of E2EE while being realistic about the scanning capabilities of modern technology.”
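The match-list approach Scheffler describes amounts to checking content against a list of hashes of known harmful material. In deployed systems the matching uses perceptual hashes, such as PhotoDNA, so that re-encoded or slightly altered images still match; the exact-match SHA-256 version below is a simplified stand-in, and the file names and hash value are hypothetical.

```python
# Simplified sketch of match-list scanning (exact cryptographic hashes;
# real deployments use perceptual hashes such as PhotoDNA).
import hashlib

def sha256_file(path: str) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical hash list distributed by a clearinghouse such as NCMEC or GIFCT.
known_hashes = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_flagged(path: str) -> bool:
    """Flag the file if its hash appears on the match list."""
    return sha256_file(path) in known_hashes
```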
While Scheffler’s research focuses on technological solutions to improve content moderation both in E2EE and otherwise, she also emphasizes the need for clear public policy to prevent the slippery slope of content scanning. She suggests that not all scanning needs to cause a “report” to leave a device, and that policymakers should limit content scanning to specific purposes, such as child safety and counterterrorism.
“E2EE isn’t going away anytime soon, and content moderation isn’t going away anytime soon, either,” Scheffler said. “A lot of my work is trying to figure out, ‘given the technology we have now, and that we’ll have soon, how can we reconcile all the different goals we have?’”