CyLab faculty, students to present at ACM CCS 2025
Michael Cunningham
Oct 9, 2025
Carnegie Mellon faculty and students will present on a wide range of topics at the Association for Computing Machinery Special Interest Group on Security, Audit and Control’s (SIGSAC’s) Conference on Computer and Communications Security (ACM CCS). Held at the Taipei International Convention Center in Taipei, Taiwan on October 13-17, the event brings together information security researchers, practitioners, developers, and users from all over the world to explore cutting-edge ideas and results.
Here, we’ve compiled a list of the papers co-authored by CyLab Security and Privacy Institute members that are being presented at the event.
Exploiting the Shared Storage API
Authors: Alexandra Nisenoff, Carnegie Mellon University; Deian Stefan, UC San Diego; Nicolas Christin, Carnegie Mellon University
Abstract
As part of an effort to replace third-party cookies, Google introduced the Shared Storage API as one of their "Privacy Sandbox" proposals. The Shared Storage API seeks to replace some of the benign functionalities that third-party cookies facilitate while mitigating the potential privacy harms that they can cause, such as reidentifying users across websites. Shared Storage seeks to do this by allowing third parties to store data that is not partitioned by top-level website, while limiting read access to those data.
We find that the implementation and design of the API have flaws that allow for both the reidentification of users across sites and the leakage of more data than intended by Google. With the API being deployed in Google Chrome and major advertisers and trackers having completed the processes required to gain access to the API, the Shared Storage API may not do as much as intended to improve the state of privacy on the web. We present several attacks on the API that circumvent the key goals laid out by Google as well as discuss potential extensions and mitigation strategies. While we have responsibly disclosed our attacks to Google, most attacks remain possible in Chrome.
"Is this a scam??": The nature and quality of Reddit discussion about scams
Authors: Elijah Bouma-Sims, Mandy Lanyon, and Lorrie Faith Cranor; Carnegie Mellon University
Abstract
People often use social media platforms to seek advice about scams like ecommerce fraud or phishing; however, little research has investigated the nature of such discussion. We conducted a multi-stage thematic analysis of 1,525 posts made to four communities focused on scam discussion on Reddit, primarily from /r/Scams. We found that posters use Reddit to identify scams, discuss the strategies employed by scammers, and obtain advice on coping with victimization. The scams discussed are primarily mediated by the internet or related technologies. Users in the communities we studied especially provide informational support and reassurance to victims, although some comments reinforce victim-blaming attitudes. We also observed qualitative differences in the types of support sought and given based on the community, with the board /r/Sextortion especially being used for emotional support. We conclude that Reddit’s scam discussion communities serve as a valuable resource for scam prevention and remediation. Additionally, we discuss the potential for future research and law enforcement engagement on Reddit.
One-Sided Bounded Noise: Theory, Optimization Algorithms and Applications
Authors: Hanshen Xiao, NVIDIA/Purdue University; Jun Wan, Five Rings; Elaine Shi, Carnegie Mellon University; Srinivas Devadas, MIT
Abstract
We investigate the optimal trade-off between utility and privacy using one-sided perturbation. Unlike conventional privacy-preserving statistical releases, randomization for obfuscating side-channel information is often constrained by infrastructure limitations. In practical scenarios, these constraints may only allow positive and bounded perturbations. For example, extending processing time or sending and storing dummy messages/data is typically feasible. However, implementing modifications in the opposite direction is challenging due to restrictions imposed by hardware capacity, communication protocols, and data management systems. In this paper, we establish the foundation of the positive noise mechanism within three semantic privacy frameworks: Differential Privacy (DP), Maximal Leakage (MaxL), and Probably Approximately Correct (PAC) Privacy. We then present a series of results that characterize or approximate the optimal one-sided noise distribution, subject to a second-moment budget and a bounded maximal magnitude. Building on this theoretical foundation, we develop efficient tools to solve the underlying optimization problems. Through experiments conducted in various scenarios, we demonstrate that existing techniques, such as Truncated Biased Laplace noise, are often suboptimal and result in excessive performance degradation. For instance, in an anonymous communication system with a 250K message budget, our optimized DP noise mechanism achieves a 21× reduction in dummy messages and an 18× reduction in dummy message latency overhead compared to traditional methods.
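To make the setting concrete: the paper's constraint is that perturbations can only be added (e.g., extra delay or dummy messages), never subtracted, and are capped by a hard budget. The toy sketch below illustrates just that constraint with a uniform distribution on [0, b] — a stand-in chosen for simplicity, not the paper's optimized noise distribution, and the function names and the budget value are illustrative assumptions.

```python
import random

def one_sided_uniform_noise(b: float) -> float:
    """Draw a positive, bounded perturbation from Uniform[0, b].

    Toy stand-in for an optimized one-sided distribution: the draw is
    always non-negative (we can only ADD delay or dummy messages, never
    remove any) and never exceeds the hard maximal magnitude b.
    """
    return random.uniform(0.0, b)

def second_moment(samples) -> float:
    """Empirical second moment E[X^2] — the budget the paper constrains."""
    return sum(x * x for x in samples) / len(samples)

random.seed(0)
b = 5.0  # illustrative maximal-magnitude bound
draws = [one_sided_uniform_noise(b) for _ in range(10_000)]

# Both defining constraints of one-sided bounded noise hold:
assert all(0.0 <= x <= b for x in draws)
# For Uniform[0, b] the second moment is b^2 / 3, so the empirical
# estimate should sit near 25/3 ≈ 8.33 for this choice of b.
```

The paper's contribution is finding the distribution that is optimal under these two constraints for a given privacy framework; the uniform draw here only demonstrates the shape of the feasible set.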
Pixnapping: Bringing Pixel Stealing out of the Stone Age
Authors: Alan Wang, University of Illinois Urbana-Champaign; Pranav Gopalkrishnan, University of Washington; Yingchen Wang, University of California, Berkeley; Christopher Fletcher, University of California, Berkeley; Hovav Shacham, UT Austin; David Kohlbrenner, University of Washington; Riccardo Paccagnella, Carnegie Mellon University
Abstract
Pixel stealing attacks enable malicious websites to leak sensitive content displayed in victim websites. The idea, introduced by Stone in 2013, is to embed victim websites in iframes and use SVG filters to compute on, and create side channels as a function of, those websites’ pixels. Fortunately, despite the danger, pixel stealing attacks are all but mitigated today thanks to websites and web browsers heavily restricting iframes and cross-origin cookie sharing.
This paper introduces a pixel stealing framework targeting Android devices that bypasses all browser mitigations and can even steal secrets from non-browser apps. Our key observation is that Android APIs enable an attacker to create an analog to Stone-style attacks outside of the browser. Specifically, a malicious app can force victim pixels into the rendering pipeline via Android intents and compute on those victim pixels using a stack of semi-transparent Android activities. Crucially, our framework enables stealing secrets only stored locally (e.g., 2FA codes and Google Maps Timeline), which have never before been in reach of pixel stealing attacks.
We instantiate our pixel stealing framework on Google and Samsung phones—which differ in both hardware and graphical software. On the Google phones, we additionally provide evidence that the pixel color-dependent timing measured in our attack is due to GPU graphical data compression. We demonstrate end-to-end attacks that steal pixels from both browser and non-browser victims, including Google Accounts, Gmail, Perplexity AI, Signal, Venmo, Google Messages, and Google Maps. Finally, we demonstrate an end-to-end attack capable of stealthily stealing security-critical and ephemeral 2FA codes from Google Authenticator in under 30 seconds.