CyLab faculty, students to present at the 34th USENIX Security Symposium

Michael Cunningham

Aug 7, 2025


Carnegie Mellon faculty and students will present on a wide range of topics at the 34th USENIX Security Symposium. Held in Seattle on August 13-15, the event brings together experts from around the world, who will highlight the latest advances in the security and privacy of computer systems and networks.

In addition to the papers that will be presented at the event, CyLab Security and Privacy Institute faculty members Lujo Bauer and Giulia Fanti are serving as key organizers of USENIX Security '25. Bauer, professor of Electrical and Computer Engineering and Software and Societal Systems, is program co-chair of the event, and Fanti, Angel Jordan associate professor of Electrical and Computer Engineering and co-director of CyLab-Africa, is program vice co-chair.

Here, we’ve compiled a list of the 13 papers co-authored by CyLab members that are being presented at the event.

As Advertised? Understanding the Impact of Influencer VPN Ads

Authors: Omer Akgul, University of Maryland/Carnegie Mellon University; Richard Roberts, Emma Shroyer, Dave Levin, and Michelle L. Mazurek, University of Maryland

Abstract: Influencer VPN ads (sponsored segments) on YouTube often disseminate misleading information about both VPNs and security & privacy more broadly. However, it remains unclear how (or whether) these ads affect users' perceptions and knowledge about VPNs. In this work, we explore the relationship between YouTube VPN ad exposure and users' mental models of VPNs, security, and privacy. We use a novel VPN ad detection model to calculate the ad exposure of 217 participants via their YouTube watch histories, and we develop scales to characterize their mental models in relation to claims commonly made in VPN ads. Through (pre-registered) regression-based analysis, we find that exposure to VPN ads is significantly correlated with familiarity with VPN brands and increased belief in (hyperbolic) threats. While not specific to VPNs, these threats are often discussed in VPN ads. In contrast, although many participants agree with both factual and misleading mental models of VPNs that often appear in ads, we find no significant correlation between exposure to VPN ads and these mental models. These findings suggest that, if VPN ads do impact mental models, the impact is predominantly emotional (i.e., threat perceptions) rather than technical.

Flexway O-Sort: Enclave-Friendly and Optimal Oblivious Sorting

Authors: Tianyao Gu, Carnegie Mellon University and Oblivious Labs Inc.; Yilei Wang, Alibaba Cloud; Afonso Tinoco, Carnegie Mellon University and Oblivious Labs Inc.; Bingnan Chen and Ke Yi, HKUST; Elaine Shi, Carnegie Mellon University and Oblivious Labs Inc.

Abstract: Oblivious algorithms are being deployed at large scale in the real world to enable privacy-preserving applications such as Signal's private contact discovery. Oblivious sorting is a fundamental building block in the design of oblivious algorithms for numerous computation tasks. Unfortunately, there is still a theory-practice gap for oblivious sort. The commonly implemented bitonic sorting algorithm is not asymptotically optimal, whereas known asymptotically optimal algorithms suffer from large constants.

In this paper, we construct a new oblivious sorting algorithm called flexway o-sort, which is asymptotically optimal, concretely efficient, and suitable for implementation in hardware enclaves such as Intel SGX. For moderately large inputs of 12 GB, our flexway o-sort algorithm outperforms known oblivious sorting algorithms by 1.32x to 28.8x when the data fits within the hardware enclave, and by 4.1x to 208x when the data does not fit within the hardware enclave. We also implemented various applications of oblivious sorting, including histogram, database join, and initialization of an ORAM data structure. For these applications and data sets from 8 GB to 32 GB, we achieve a 1.44x to 2.3x speedup over bitonic sort when the data fits within the enclave, and a 4.9x to 5.5x speedup when the data does not fit within the enclave.
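Flexway o-sort itself is more involved, but the core property of any oblivious sort can be seen in a toy version of bitonic sort, the commonly implemented baseline the paper compares against: the sequence of positions compared depends only on the input length, never on the data, so observing memory accesses reveals nothing about the values being sorted. This is an illustrative sketch, not the paper's algorithm.

```python
def compare_exchange(a, i, j, up):
    # Both slots are always read and written, and the swap is computed
    # arithmetically rather than via a data-dependent branch, so the
    # access pattern is identical for every input of a given length.
    x, y = a[i], a[j]
    swap = int((x > y) == up)
    a[i] = swap * y + (1 - swap) * x
    a[j] = swap * x + (1 - swap) * y

def bitonic_merge(a, lo, n, up):
    # Merge a bitonic sequence of length n (a power of two) in place.
    if n > 1:
        m = n // 2
        for i in range(lo, lo + m):
            compare_exchange(a, i, i + m, up)
        bitonic_merge(a, lo, m, up)
        bitonic_merge(a, lo + m, m, up)

def bitonic_sort(a, lo=0, n=None, up=True):
    # Sort a[lo:lo+n] with a fixed, data-independent comparison schedule.
    # Requires n to be a power of two.
    if n is None:
        n = len(a)
    if n > 1:
        m = n // 2
        bitonic_sort(a, lo, m, True)       # ascending half
        bitonic_sort(a, lo + m, m, False)  # descending half
        bitonic_merge(a, lo, n, up)        # merge the bitonic result
```

Bitonic sort performs O(n log² n) comparisons, which is exactly the asymptotic gap flexway o-sort closes while staying enclave-friendly.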

Auspex: Unveiling Inconsistency Bugs of Transaction Fee Mechanism in Blockchain

Authors: Zheyuan He, University of Electronic Science and Technology of China; Zihao Li, The Hong Kong Polytechnic University; Jiahao Luo, University of Electronic Science and Technology of China; Feng Luo, The Hong Kong Polytechnic University; Junhan Duan, Carnegie Mellon University; Jingwei Li and Shuwei Song, University of Electronic Science and Technology of China; Xiapu Luo, The Hong Kong Polytechnic University; Ting Chen and Xiaosong Zhang, University of Electronic Science and Technology of China

Abstract: The transaction fee mechanism (TFM) in blockchain prevents resource abuse by charging users based on resource usage, but inconsistencies between charged fees and actual resource consumption, termed TFM inconsistency bugs, introduce significant security and financial risks.

In this paper, we present Auspex, the first tool that automatically detects TFM inconsistency bugs in the Ethereum ecosystem by leveraging fuzzing technology. To efficiently trigger and identify TFM inconsistency bugs, Auspex introduces three novel techniques: (i) a chain-based test case generation strategy that enables Auspex to efficiently generate test cases; (ii) a charging-guided fuzzing approach that guides Auspex to explore more code logic; and (iii) two general bug oracles, a fee consistency property and a resource consistency property, for automatically detecting bugs. We evaluate Auspex on Ethereum and demonstrate its effectiveness by discovering 13 previously unknown TFM inconsistency bugs and covering 3.5 times more code branches than state-of-the-art tools. We further explore the financial and security impact of the bugs. On one hand, these bugs have caused losses exceeding millions of dollars for users on both Ethereum and BSC. On the other hand, a denial-of-service (DoS) attack exploiting these bugs can prolong transaction wait time by 4.5 times during the attack period.
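The "fee consistency" idea behind Auspex's bug oracles can be illustrated with a deliberately simplified check: the fee a transaction is charged should equal the resources it consumed times the unit price. (Real EVM fee rules also involve base fees, priority tips, and refunds, all of which this sketch ignores; the function name is hypothetical.)

```python
def fee_consistent(gas_used, gas_price, charged_fee):
    # Simplified fee-consistency oracle: an inconsistency bug is any
    # execution where the charged fee diverges from gas_used * gas_price.
    return charged_fee == gas_used * gas_price
```

A fuzzer in this style would generate transactions, execute them, and flag any run where the oracle returns False as a candidate TFM inconsistency bug.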

Blockchain Address Poisoning

Authors: Taro Tsuchiya and Jin-Dong Dong, Carnegie Mellon University; Kyle Soska, Independent; Nicolas Christin, Carnegie Mellon University

Abstract: In many blockchains, e.g., Ethereum, Binance Smart Chain (BSC), the primary representation used for wallet addresses is a hard-to-memorize 40-digit hexadecimal string. As a result, users often select addresses from their recent transaction history, which enables blockchain address poisoning. The adversary first generates lookalike addresses similar to one with which the victim has previously interacted, and then engages with the victim to "poison" their transaction history. The goal is to have the victim mistakenly send tokens to the lookalike address, as opposed to the intended recipient. Compared to contemporary studies, this paper provides four notable contributions. First, we develop a detection system and perform measurements over two years on both Ethereum and BSC. We identify 13 times more attack attempts than previously reported, totaling 270M on-chain attacks targeting 17M victims. 6,633 incidents have caused at least 83.8M USD in losses, which makes blockchain address poisoning one of the largest cryptocurrency phishing schemes observed in the wild. Second, we analyze a few large attack entities using improved clustering techniques, and model attacker profitability and competition. Third, we reveal attack strategies: targeted populations, success conditions (address similarity, timing), and cross-chain attacks. Fourth, we mathematically define and simulate the lookalike address generation process across various software- and hardware-based implementations, and identify a large-scale attacker group that appears to use GPUs. We also discuss defensive countermeasures.
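Why do lookalike addresses fool users? Wallet UIs typically display an address as only its first and last few hex characters (e.g. 0x1a2b...e5f6), so an attacker only needs to match the visible ends. A minimal sketch of measuring that visible similarity (the function name and display convention are illustrative assumptions, not the paper's metric):

```python
import os

def visible_similarity(addr_a, addr_b):
    # Return the lengths of the matching hex prefix and suffix of two
    # addresses: the parts a truncated wallet display would show.
    a = addr_a.lower().removeprefix("0x")
    b = addr_b.lower().removeprefix("0x")
    prefix = len(os.path.commonprefix([a, b]))
    suffix = len(os.path.commonprefix([a[::-1], b[::-1]]))
    return prefix, suffix
```

Each additional matching hex character costs an attacker roughly 16x more brute-force work, which is consistent with the paper's observation that the most capable attacker groups appear to use GPUs for address generation.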

FABLE: Batched Evaluation on Confidential Lookup Tables in 2PC

Authors: Zhengyuan Su, Tsinghua University; Qi Pang, Simon Beyzerov, and Wenting Zheng, Carnegie Mellon University

Abstract: Secure two-party computation (2PC) is a cryptographic technique that enables two mutually distrusting parties to jointly evaluate a function over their private inputs. We consider a 2PC primitive called confidential lookup table (LUT) evaluation, which is useful in privacy-preserving ML inference and data analytics. In this setting, a server holds a confidential LUT and evaluates it over an input secret shared between a client and the server, producing a secret-shared output. Existing approaches suffer from high asymptotic complexity and practical inefficiency, with some designs lacking confidentiality guarantees for the LUT. Recognizing that many applications of confidential LUT evaluation require processing multiple inputs with the same LUT, we propose FABLE, a system designed to efficiently evaluate a LUT on a large batch of queries simultaneously. Compared to the state-of-the-art confidential LUT evaluation methods, FABLE achieves up to 28-101× speedup in LAN environments and up to 50-393× speedup in WAN environments.
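FABLE's actual protocol uses cryptographic machinery well beyond this, but two of the underlying ideas can be sketched in a few lines: additive secret sharing (each share alone reveals nothing), and the observation that a table lookup is an inner product with a one-hot index vector, which 2PC protocols can evaluate with the vector and output secret-shared. Both function names are illustrative, not from the paper.

```python
import secrets

MOD = 2**32  # arithmetic shares live in a fixed ring

def share(x):
    # Additive secret sharing: each share is uniformly random on its
    # own; the value is recovered only by summing both shares mod MOD.
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def lut_as_inner_product(table, index):
    # A LUT lookup is an inner product with a one-hot index vector.
    # In 2PC, the one-hot vector is secret-shared between client and
    # server, so neither party learns the index or the output.
    onehot = [int(i == index) for i in range(len(table))]
    return sum(t * e for t, e in zip(table, onehot)) % MOD
```

Batching helps precisely because many queries reuse the same table: the per-query cryptographic cost can be amortized across the batch, which is where FABLE's 28-393x speedups come from.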

PolySys: An Algebraic Leakage Attack Engine

Authors: Zachary Espiritu, MongoDB Research; Seny Kamara, Brown University and MongoDB Research; Tarik Moataz, MongoDB Research; Andrew Park, Carnegie Mellon University and MongoDB Research

Abstract: In this work, we propose a novel framework called PolySys for modeling and designing leakage attacks as constraint-solving algorithms over polynomial systems. PolySys formalizes the design of attacks using invertible encodings, structural and leakage equations, and efficient constraint-solving algorithms including SAT and constraint solvers. It is capable of modeling resolution, known-data, and inference attacks for common leakage patterns.

To demonstrate the practicality of our framework, we implement a PolySys attack engine in Python and apply it to state-of-the-art query recovery, data resolution, and query inference attacks on point and range multi-maps. Our results show that PolySys outperforms all existing attacks under identical assumptions, achieving up to 60× higher recovery rates in some scenarios. While scalability remains a challenge for larger datasets, PolySys represents a promising step toward a general-purpose framework for designing leakage attacks. We believe future work can further enhance its efficiency to scale to larger and more complex workloads.

Towards Practical, End-to-End Formally Verified X.509 Certificate Validators with Verdict

Authors: Zhengyao Lin, Michael McLoughlin, and Pratap Singh, Carnegie Mellon University; Rory Brennan-Jones, University of Rochester; Paul Hitchcox, Carnegie Mellon University; Joshua Gancher, Northeastern University; Bryan Parno, Carnegie Mellon University

Abstract: Validating X.509 certificates is a critical part of Internet security, but the relevant standards are complex and ambiguous. Most X.509 validators also intentionally deviate from these standards in idiosyncratic ways, often for security or backward compatibility. Unsurprisingly, the result is a long history of security vulnerabilities.

In this work, we present Verdict, the first end-to-end formally verified X.509 certificate validator with customizable validation policies. Verdict's formal guarantees cover certificate parsing, path building, and path validation. To make Verdict practical both to verify and to use, we specify its correctness generically in terms of a user-supplied validation policy written concisely in first-order logic, with a proof-producing compiler to efficient Rust code.

To demonstrate Verdict's expressiveness, we use Verdict's policy framework to implement the X.509 validation policies in Google Chrome, Mozilla Firefox, and OpenSSL, and formally prove that they conform to a subset of RFC requirements. We instantiate Verdict with each policy and show that Verdict matches the corresponding baseline's behavior and state-of-the-art performance on over ten million certificates from Certificate Transparency logs.
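To give a flavor of what a validation policy is, here is a toy predicate over a certificate. The dict fields and thresholds are hypothetical illustrations; Verdict's real policies are written in first-order logic over parsed X.509 structure (names, extensions, key usage, chain constraints) and are compiled to verified Rust, none of which this sketch captures.

```python
from datetime import datetime, timezone

def toy_policy(cert, now):
    # A policy is just a predicate: given a parsed certificate and the
    # current time, decide acceptance. Fields here are hypothetical.
    return (cert["not_before"] <= now <= cert["not_after"]
            and cert["key_bits"] >= 2048)
```

Expressing browser policies (Chrome, Firefox, OpenSSL) in this predicate style is what lets Verdict prove, once and generically, that the compiled validator agrees with whatever policy it is instantiated with.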

OwlC: Compiling Security Protocols to Verified, Secure, High-Performance Libraries

Authors: Pratap Singh, Carnegie Mellon University; Joshua Gancher, Northeastern University; Bryan Parno, Carnegie Mellon University

Abstract: Cryptographic security protocols, such as TLS or WireGuard, form the foundation of a secure Internet; hence, a long line of research has shown how to formally verify their high-level designs. Unfortunately, these formal guarantees have not yet reached real-world implementations of these protocols, which still rely on testing and ad-hoc manual audits for security and correctness. This gap may be explained, in part, by the substantial performance and/or development overhead imposed by prior efforts to verify implementations.

To make it more practical to deploy verified implementations of security protocols, we present OwlC, the first fully automated, security-preserving compiler for verified, high-performance implementations of security protocols. From a high-level protocol specification proven computationally secure in the Owl language, OwlC emits an efficient, interoperable, side-channel resistant Rust library that is automatically formally verified to be correct.

We produce verified libraries for all previously written Owl protocols, and we also evaluate OwlC on two new verified case studies: WireGuard and Hybrid Public-Key Encryption (HPKE). Our verified implementations interoperate with existing implementations, and their performance matches unverified industrial baselines on end-to-end benchmarks.

Vulnerability of Text-Matching in ML/AI Conference Reviewer Assignments to Collusions

Authors: Jhih-Yi (Janet) Hsieh, Aditi Raghunathan, and Nihar B. Shah, Carnegie Mellon University

Abstract: In the peer review process of top-tier machine learning (ML) and artificial intelligence (AI) conferences, reviewers are assigned to papers through automated methods. These assignment algorithms consider two main factors: (1) reviewers' expressed interests indicated by their bids for papers, and (2) reviewers' domain expertise inferred from the similarity between the text of their previously published papers and the submitted manuscripts. A significant challenge these conferences face is the existence of collusion rings, where groups of researchers manipulate the assignment process to review each other's papers, providing positive evaluations regardless of their actual quality. Most efforts to combat collusion rings have focused on preventing bid manipulation, under the assumption that the text similarity component is secure. In this paper, we demonstrate that even in the absence of bidding, colluding reviewers and authors can exploit the machine learning based text-matching component of reviewer assignment used at top ML/AI venues to get assigned their target paper. We also highlight specific vulnerabilities within this system and offer suggestions to enhance its robustness.
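The text-matching component the paper attacks assigns reviewers by scoring the textual similarity between their past papers and each submission. A toy bag-of-words cosine similarity shows the shape of such a score; real systems use TF-IDF or learned embeddings, and this sketch is not the algorithm of any particular venue.

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    # Bag-of-words cosine similarity: assignment scores of this kind
    # rise with textual overlap, which is exactly what colluders can
    # manipulate by seeding shared vocabulary into their papers.
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(cnt * b[w] for w, cnt in a.items())
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Because the score is a deterministic function of public text, an author who knows (or can approximate) the matching model can steer it, which is the vulnerability the paper demonstrates even when bidding is disabled.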

Privacy Solution or Menace? Investigating Perceptions of Radio-Frequency Sensing

Authors: Maximiliane Windl, LMU Munich/Munich Center for Machine Learning (MCML); Omer Akgul, Carnegie Mellon University/RSAC Labs; Nathan Malkin, New Jersey Institute of Technology; Lorrie Faith Cranor, Carnegie Mellon University

Abstract: Radio-frequency sensors are often introduced as privacy-preserving alternatives to cameras, as they enable similar use cases without relying on visual data. However, researchers argue that radio-frequency sensors cause privacy risks similar to cameras and even introduce additional risks. We conducted in-depth interviews (N=14) and a large-scale vignette survey (N=510) to understand people's perceptions and privacy concerns around radio-frequency sensing. Most interviewees were initially unaware of the full capabilities of radio-frequency sensing but expressed nuanced concerns upon learning more. Our survey revealed that, while people expressed concerns, they mostly preferred radio-frequency sensors over cameras in private locations. However, they preferred cameras when considering radio-frequency sensing from a neighbor's perspective and in security-relevant situations. Protective measures can reduce concerns, but the best protection depends on the context. Our findings can inform educational and legislative efforts to ensure a privacy-preserving future with radio-frequency technology.

Navigating Security and Privacy Threats in Homeless Service Provision

Authors: Yuxi Wu, Northeastern University; Ruoxi Zhang, Shiyue Liu, Mufei He, and Aidan Hong, Carnegie Mellon University; Jeremy J. Northup, Point Park University; Calla Kainaroi, Bridge to the Mountains; Fei Fang and Hong Shen, Carnegie Mellon University

Abstract: People experiencing homelessness interact with service providers to access essential services. As clients, homeless individuals are expected to reveal sensitive information about themselves to service providers, while personal security and privacy (S&P) preferences fall by the wayside. Simultaneously, providers take on S&P-adjacent responsibilities: helping clients fill out applications, safekeeping clients' personal documents, monitoring clients' online safety, undergoing workplace S&P training, etc. We created five storyboards to represent S&P challenges that clients can face when they interact with providers. In interviews with homeless individuals and service providers in the Northeastern United States, we use these storyboards to explore the S&P challenges in client-provider relationships within homeless services. We find a set of mismatches in S&P priorities between clients and providers, leading to mistrust between the two parties. We provide design recommendations, envisioned by both parties, for providers to bridge these mismatches.

TRex: Practical Type Reconstruction for Binary Code

Authors: Jay Bosamiya, Microsoft Research; Maverick Woo and Bryan Parno, Carnegie Mellon University

Abstract: A lack of high-quality source-level types plagues decompiled code despite decades of advancement in the science and art of decompilation. Accurate type information is crucial in understanding program behavior, and existing decompilers rely heavily on manual input from human experts to improve decompiled output. We propose TRex, a tool that performs automated deductive type reconstruction, using a new perspective that accounts for the inherent impossibility of recovering lost source types. Compared with Ghidra, a state-of-the-art decompiler used by practitioners, TRex shows a noticeable improvement in the quality of output types on 123 of 125 binaries. By shifting focus away from recovering lost source types and towards constructing accurate behavior-capturing types, TRex broadens the possibilities for simpler and more elegant decompilation tools, and ultimately reduces the manual effort needed to analyze and understand binary code.
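TRex's shift in perspective, from recovering the lost source type to describing the behavior the binary actually exhibits, can be illustrated with a toy layout inference: collect the (offset, size) memory accesses made through one base pointer and propose fields from them. This sketch is an assumption-laden simplification, not TRex's algorithm.

```python
def infer_struct_fields(accesses):
    # accesses: (offset, size) pairs observed at loads/stores through a
    # single base pointer. Deduplicate and sort to propose a candidate
    # layout that captures observed behavior, without claiming it
    # matches the original (unrecoverable) source-level type.
    layout = sorted(set(accesses))
    return [(f"field_{off:x}", off, size) for off, size in layout]
```

Even this crude behavioral view is often more useful than a wrong guess at the source type, which is the intuition behind deductive type reconstruction.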

Vest: Verified, Secure, High-Performance Parsing and Serialization for Rust

Authors: Yi Cai, University of Maryland, College Park; Pratap Singh and Zhengyao Lin, Carnegie Mellon University; Jay Bosamiya, Microsoft Research; Joshua Gancher, Northeastern University; Milijana Surbatovich, University of Maryland, College Park; Bryan Parno, Carnegie Mellon University

Abstract: Many software vulnerabilities lurk in parsers and serializers, due to their need to be both high-performance and conformant with complex binary formats. To categorically eliminate these vulnerabilities, prior efforts have sought to deliver provable guarantees for parsing and serialization. Unfortunately, security, performance, and usability issues with these efforts mean that unverified parsers and serializers remain the status quo.

Hence, we present Vest, the first framework for high-performance, formally verified binary parsers and serializers that combines expressivity and ease of use with state-of-the-art correctness and security guarantees, including—for the first time—resistance to basic digital side-channel attacks. Most developers interact with Vest by defining their binary format in an expressive, RFC-like DSL. Vest then generates and automatically verifies high-performance parser and serializer implementations in Rust. This process relies on an extensible library of verified parser/serializer combinators we have developed, and that expert developers can use directly.

We evaluate Vest via three case studies: the Bitcoin block format, TLS 1.3 handshake messages, and the WebAssembly binary format. We show that Vest's executable performance is on par with (or better than) that of hand-written, unverified parsers and serializers, and that its verification performance is orders of magnitude better than that of comparable prior work.
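The combinator style underlying Vest's verified library can be sketched in a few lines: small parsers that return a value plus the next input position, and combinators that compose them. This toy Python analogue has none of Vest's machine-checked round-trip or side-channel guarantees; it only shows the programming model.

```python
def u8(buf, pos):
    # Parse one unsigned byte; return (value, next_pos), or None on failure.
    if pos >= len(buf):
        return None
    return buf[pos], pos + 1

def pair(p, q):
    # Combinator: sequence two parsers into one that returns a tuple.
    # Vest generates verified Rust in this style from an RFC-like DSL.
    def parse(buf, pos):
        r1 = p(buf, pos)
        if r1 is None:
            return None
        v1, pos1 = r1
        r2 = q(buf, pos1)
        if r2 is None:
            return None
        v2, pos2 = r2
        return (v1, v2), pos2
    return parse
```

Because every format is built from a small set of such combinators, proving each combinator correct once gives correctness for every format expressed in the DSL, which is how Vest scales verification to formats as large as TLS 1.3 and WebAssembly.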