CyLab researchers present their work at SOUPS 2025
Michael Cunningham
Aug 4, 2025
Carnegie Mellon faculty and students will share their research at next week’s 2025 Symposium on Usable Privacy and Security (SOUPS), which takes place August 10-12 in Seattle.
Founded by CyLab Director Lorrie Cranor and first hosted by CMU in 2005, the event continues to bring together interdisciplinary groups of researchers focused on solving challenges at the intersection of security, privacy, and human-computer interaction.
Here, we’ve compiled a list of technical papers co-authored by CyLab Security and Privacy Institute members that will be presented at the event. CyLab researchers are also presenting a number of posters and workshop papers at SOUPS.
Adopting AI to Protect Industrial Control Systems: Assessing Challenges and Opportunities from the Operators’ Perspective
Authors: Clement Fung, Eric Zeng, and Lujo Bauer, Carnegie Mellon University
Abstract: Industrial control systems (ICS) manage critical physical processes such as electric distribution and water treatment. Attackers infiltrate ICS and manipulate these critical processes, causing damage and harm. AI-based approaches can detect such attacks and raise alarms for operators, but they are not commonly used in practice and it is unclear why. In this work, we directly asked practitioners about current practices for alarms in ICS and their perspectives on adopting AI to support these practices. We conducted 18 semi-structured interviews with practitioners who work on protecting ICS, through which we identified tasks commonly performed for alarms such as raising alarms when anomalies are detected, coordinating operator response to alarms, and analyzing data to improve alarm rule sets. We found that practitioners often struggle with tasks beyond anomaly detection, such as alarm diagnosis, and we propose designing AI-based tools to support these tasks. We also identified barriers to adopting AI in ICS (e.g., limited data collection, low trust in vendor technology) and recommend ways to make AI-based tools more effective and trusted by practitioners, such as demonstrating model transparency through interactive pilot projects.
Design and Evaluation of Privacy-Preserving Protocols for Agent-Facilitated Mobile Money Services in Kenya
Authors: Karen Sowon, Indiana University; Collins W. Munyendo, The George Washington University; Lily Klucinec, Carnegie Mellon University; Eunice Maingi and Gerald Suleh, Strathmore University; Lorrie Faith Cranor and Giulia Fanti, Carnegie Mellon University; Conrad Tucker and Assane Gueye, Carnegie Mellon University-Africa
Abstract: Mobile Money (MoMo), a technology that allows users to complete financial transactions using a mobile phone without requiring a bank account, is a common method for processing financial transactions in Africa and other developing regions. Users can deposit and withdraw money with the help of human agents. During deposit and withdrawal operations, know-your-customer (KYC) processes require agents to access and verify customer information such as name and ID number, which can introduce privacy and security risks. In this work, we design alternative protocols for MoMo deposits/withdrawals that protect users' privacy while enabling KYC checks by redirecting the flow of sensitive information from the agent to the MoMo provider. We evaluate the usability and efficiency of our proposed protocols in a role-play and semi-structured interview study with 32 users and 15 agents in Kenya. We find that users and agents prefer the new protocols, due in part to convenient and efficient verification using biometrics as well as better data privacy and access control. However, our study also surfaced challenges that need to be addressed before these protocols can be deployed.
Misuse, Misreporting, Misinterpretation of Statistical Methods in Usable Privacy and Security Papers
Authors: Jenny Tang, Lujo Bauer, and Nicolas Christin, Carnegie Mellon University
Abstract: Null hypothesis significance testing (NHST) is commonly used in quantitative usable privacy and security studies. Many papers use results from statistical tests to assert whether effects or differences exist depending on the resulting p-value. We conduct a systematic review of papers published in 10 editions of the Symposium on Usable Privacy and Security over a span of 20 years to evaluate the field’s use of NHST. We code statistical tests for potential statistical validity, reporting, or interpretation issues that may undermine assertions made in the 121 papers that use NHST. Most problematically, tests in 23% of papers inadequately account for non-independence between samples, leading to potentially invalid claims. 58% of papers lack information to verify whether an assertion is supported, such as imprecisely specifying the statistical test conducted. Many papers contain more minor statistical issues or report statistics in ways that deviate from best practice. We conclude with recommendations for statistical reporting and statistical thinking in the field.
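To illustrate the kind of non-independence problem the paper flags, here is a minimal sketch (not drawn from the paper itself; all numbers and variable names are illustrative assumptions): repeated measurements from the same participants are analyzed as if they were independent observations, which inflates the false-positive rate compared with aggregating to one value per participant.

```python
# Illustrative simulation only: shows how ignoring non-independence
# (pooling correlated repeated measures) inflates Type I error.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_repeats, n_sims = 10, 5, 2000
fp_pooled, fp_aggregated = 0, 0

for _ in range(n_sims):
    # Each participant has a stable personal baseline; there is no true group effect.
    base_a = rng.normal(0, 1, n_per_group)
    base_b = rng.normal(0, 1, n_per_group)
    # Several repeated, correlated measurements per participant.
    obs_a = base_a[:, None] + rng.normal(0, 0.3, (n_per_group, n_repeats))
    obs_b = base_b[:, None] + rng.normal(0, 0.3, (n_per_group, n_repeats))

    # Invalid analysis: pool all repeated measurements per group as if independent.
    if stats.ttest_ind(obs_a.ravel(), obs_b.ravel()).pvalue < 0.05:
        fp_pooled += 1
    # More defensible analysis: reduce to one mean per participant before testing.
    if stats.ttest_ind(obs_a.mean(axis=1), obs_b.mean(axis=1)).pvalue < 0.05:
        fp_aggregated += 1

print(f"False-positive rate, repeats pooled as independent: {fp_pooled / n_sims:.2f}")
print(f"False-positive rate, one value per participant:     {fp_aggregated / n_sims:.2f}")
```

With no real difference between the groups, the pooled test rejects the null far more often than the nominal 5%, while the per-participant test stays close to it.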
Replication: "No one can hack my mind" - 10 years later: An update and outlook on experts' and non-experts' security practices and advice
Authors: Anna-Marie Ortloff, University of Bonn; Jenny Tang, Carnegie Mellon University; Arthi Arumugam, Daniel Huschina, Lisa Geierhaas, and Florin Martius, University of Bonn; Luisa Jansen, University of Bern; Kolja von der Twer and Lilly Jungbluth, University of Bonn; Matthew Smith, University of Bonn, Fraunhofer FKIE, Code Intelligence
Abstract: In 2015, Ion, Reeder, and Consolvo studied IT security advice and self-reported security behavior of experts and non-experts. In 2019, Busse et al. replicated this study and found only minor changes in expert advice and non-expert behavior, with persisting differences between the two groups. Now, 10 years later, we replicated the study with an updated survey and compared our results to both prior studies. Additionally, we interviewed security experts and asked them for their views on the past and future of IT security advice. We report the current state of security behavior and advice based on two survey samples, one non-expert (N=990) and one expert (N=75), as well as an additional expert interview sample (N=35). We identified notable changes in reported security behavior for both experts and non-experts, including that experts and non-experts are beginning to adopt new security practices in authentication. The expert interviews show a path forward, with experts hoping for more improvements to usability and targeted advice for specific user and device contexts.