CyLab researchers present at ACM CHI 2025

Michael Cunningham

Mar 14, 2025


CyLab Security and Privacy Institute researchers are set to present nine papers and teach one course at the upcoming Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems (CHI 2025).

The conference will take place in Yokohama, Japan, from April 26 through May 1, bringing together researchers, practitioners, and industry leaders to share their latest work and ideas and to foster collaboration and innovation in digital technologies.

Below, we’ve compiled a list of papers authored by CyLab Security and Privacy Institute members that are being presented at this year’s event, as well as an introductory course on robotics and AI prototyping with the Raspberry Pi.

Can AI Model the Complexities of Human Moral Decision-making? A Qualitative Study of Kidney Allocation Decisions

Authors: Vijay Keswani, Duke University; Vincent Conitzer, Carnegie Mellon University; Walter Sinnott-Armstrong, Duke University; Breanna K. Nguyen, Yale University; Hoda Heidari, Carnegie Mellon University; Jana Schaich Borg, Duke University

Abstract: A growing body of work in Ethical AI attempts to capture human moral judgments through simple computational models. The key question we address in this work is whether such simple AI models capture the critical nuances of moral decision-making, focusing on the use case of kidney allocation. We conducted twenty interviews in which participants explained the rationale behind their judgments about who should receive a kidney. We observed that participants: (a) value patients' morally-relevant attributes to different degrees; (b) use diverse decision-making processes, citing heuristics to reduce decision complexity; (c) can change their opinions; (d) sometimes lack confidence in their decisions (e.g., due to incomplete information); and (e) express both enthusiasm and concern regarding AI assisting humans in kidney allocation decisions. Based on these findings, we discuss the challenges of computationally modeling moral judgments as a stand-in for human input, highlight drawbacks of current approaches, and suggest future directions to address these issues.

Encouraging Users to Change Breached Passwords Using the Protection Motivation Theory

Authors: Yixin Zou, Max Planck Institute for Security and Privacy; Khue Le, University of Michigan; Peter Mayer, University of Southern Denmark; Alessandro Acquisti, Carnegie Mellon University; Adam J. Aviv, The George Washington University; Florian Schaub, University of Michigan

Abstract: We draw on the Protection Motivation Theory (PMT) to design interventions that encourage users to change breached passwords. Our online experiment (n = 1,386) compared the effectiveness of a threat appeal (highlighting the negative consequences after passwords were breached) and a coping appeal (providing instructions on changing the breached password) in a 2 × 2 factorial design. Compared to the control condition, participants receiving the threat appeal were more likely to intend to change their passwords, and participants receiving both appeals were more likely to actually change their passwords. Participants’ password change behaviors were further associated with other factors, such as their security attitudes (SA-6) and the time passed since the breach, suggesting that PMT-based interventions are useful but insufficient to fully motivate users to change their passwords. Our study contributes to PMT’s application in security research and provides concrete design implications for improving compromised credential notifications.

Purpose Mode: Reducing Distraction Through Toggling Attention Capture Damaging Patterns on Social Media Websites

Authors: Hao-Ping (Hank) Lee, Carnegie Mellon University; Yi-Shyuan Chiang, University of Illinois Urbana-Champaign; Lan Gao, University of Chicago; Stephanie Yang, Carnegie Mellon University; Philipp Winter, Amnesic Systems; Sauvik Das, Carnegie Mellon University

Abstract: Social media websites thrive on user engagement by employing Attention Capture Damaging Patterns (ACDPs), e.g., infinite scroll, that prey on cognitive vulnerabilities to distract users. Prior work has taxonomized these ACDPs, but we have yet to measure how the presence of ACDPs affects perceived distraction, or whether mechanisms that suppress ACDPs reduce it. We conducted a two-week, mixed-methods field study with 29 participants to model how people get distracted when browsing social media websites, and how ACDPs might play a role. In the first week of the study, we sampled participants' in-situ perceptions of distraction, subjective perceptions of the browsing session (e.g., satisfaction), and the presence or absence of ACDPs. Participants reported feeling distracted 28% of the time; subjective perceptions and some ACDPs (e.g., notifications) correlated strongly with when they felt distracted. In the second week, participants were given access to Purpose Mode, a browser extension that allows users to "toggle off" ACDPs. With it, participants reported feeling distracted only 7% of the time and spent 21 fewer minutes per day browsing these websites. We found that Purpose Mode empowered users to feel more in control of their social media browsing and left them feeling less irritated and frustrated.

Of Secrets and Seedphrases: Conceptual misunderstandings and security challenges for seed phrase management among cryptocurrency users

Authors: Farida Eleshin, Qi Sun, Mengzhe Ye, Sauvik Das, and Jason Hong; Carnegie Mellon University

Abstract: Cryptocurrency adoption has surged dramatically, with over 500 million global users. Despite the appeal of self-custodial wallets, which grant users control over their assets, users often struggle with the complexities of securing seed phrases, leading to substantial financial losses. This paper investigates the behaviors, challenges, and security practices of cryptocurrency users regarding seed phrase management. We conducted a mixed-methods study comprising semi-structured interviews with 20 participants and a comprehensive survey of 643 respondents. Our findings reveal significant gaps in users' understanding and practices around seed phrase security and the circumstances under which users share their seed phrases. We also explore users' mental models of shared accounts and their strategies for handling cryptocurrency assets in the event of death. The majority of our participants harbored misconceptions about seed phrases that could expose them to serious security risks: for example, only 43% could correctly identify an image of a seed phrase, and many believed they could reset their seed phrase if they lost it. Moreover, only a minority had engaged in any estate planning for their crypto assets. By identifying these challenges and behaviors, we provide actionable insights for the design of more secure and user-friendly cryptocurrency wallets, ultimately aiming to enhance user confidence in managing their crypto assets, reduce their exposure to scams and accidental loss, and simplify the creation of bequeathment plans.
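
To make the "reset" misconception concrete: a seed phrase is a BIP-39 mnemonic that deterministically encodes a wallet's master secret, so there is no server-side copy to recover if it is lost. The minimal sketch below is not from the paper; it uses the open-source python-mnemonic library to illustrate this property.

```python
# Minimal sketch (not from the paper): why a seed phrase cannot be "reset".
# Uses the python-mnemonic package (pip install mnemonic), which implements BIP-39.
from mnemonic import Mnemonic

mnemo = Mnemonic("english")

# A wallet derives its seed phrase from fresh random entropy at creation time.
seed_phrase = mnemo.generate(strength=128)   # 12 words
print("Seed phrase:", seed_phrase)

# The phrase deterministically encodes the wallet's master seed; every key in
# the wallet is derived from it, and no third party holds a recovery copy.
master_seed = mnemo.to_seed(seed_phrase, passphrase="")
print("Master seed (hex):", master_seed.hex())

# Anyone holding the 12 words can regenerate the same master seed -- which is
# why sharing or losing the phrase is equivalent to sharing or losing the wallet.
assert mnemo.to_seed(seed_phrase, passphrase="") == master_seed
```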

Measuring Risks to Users' Health Privacy Posed by Third-Party Web Tracking and Targeted Advertising

Authors: Eric W. Zeng and Xiaoyuan Wu, Carnegie Mellon University; Emily N. Ertmann, Lily Huang, Danielle F. Johnson, Anusha T. Mehendale, Brandon T. Tang, Karolina Zhukoff, and Michael Adjei-Poku, University of Pennsylvania; Lujo Bauer, Carnegie Mellon University; Ari Friedman and Matthew McCoy, University of Pennsylvania

Abstract: Online advertising platforms may be able to infer privacy-sensitive information about people, such as their health conditions. This could lead to harms like exposure to predatory targeted advertising or unwanted disclosure of health conditions to employers or insurers. In this work, we experimentally evaluate whether online advertisers target people with health conditions. We collected the browsing histories of people with and without health conditions, crawled those histories to simulate their browsing profiles, and collected the ads that were served to them. Then, we compared the content of the ads between groups. We observed that the profiles of people who visited more health-related web pages received more health-related ads, and that 49.5% of health-related ads used deceptive advertising techniques. Our findings suggest that new privacy regulations and enforcement measures are needed to protect people's health privacy from online tracking and advertising platforms.

LLM Whisperer: An Inconspicuous Attack to Bias LLM Responses

Authors: Weiran Lin, Carnegie Mellon University; Anna Gerchanovsky, Duke University; Omer Akgul, Carnegie Mellon University; Lujo Bauer, Carnegie Mellon University; Matt Fredrikson, Carnegie Mellon University; Zifan Wang, Scale AI

Abstract: Writing effective prompts for large language models (LLMs) can be unintuitive and burdensome. In response, services that optimize or suggest prompts have emerged. While such services can reduce user effort, they also introduce a risk: the prompt provider can subtly manipulate prompts to produce heavily biased LLM responses. In this work, we show that subtle synonym replacements in prompts can increase the likelihood (by a difference of up to 78%) that LLMs mention a target concept (e.g., a brand, political party, or nation). We substantiate our observations through a user study, showing that our adversarially perturbed prompts 1) are indistinguishable from unaltered prompts by humans, 2) push LLMs to recommend target concepts more often, and 3) make users more likely to notice target concepts, all without arousing suspicion. The practicality of this attack has the potential to undermine user autonomy. Among other measures, we recommend implementing warnings against using prompts from untrusted parties.
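
As a toy illustration of the general idea, and not the authors' actual attack or prompt set, the sketch below shows how a prompt-rewriting service could swap in near-synonyms and how one might measure whether a target concept is mentioned more often afterward. The query_llm function and the specific word swaps are hypothetical placeholders.

```python
# Toy illustration (not the paper's method): synonym-swap prompt perturbation
# and a simple way to measure how often a target concept appears in responses.
import re

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client is actually in use."""
    raise NotImplementedError("plug in your LLM client here")

# A benign-looking rewrite: each replacement is a near-synonym, so the returned
# "optimized" prompt still reads naturally to the user who requested it.
SYNONYM_SWAPS = {
    "car": "automobile",
    "cheap": "budget-friendly",
    "good": "reliable",
}

def perturb(prompt: str) -> str:
    for original, replacement in SYNONYM_SWAPS.items():
        prompt = re.sub(rf"\b{original}\b", replacement, prompt)
    return prompt

def mention_rate(prompt: str, target: str, trials: int = 20) -> float:
    """Fraction of sampled responses that mention the target concept."""
    hits = sum(target.lower() in query_llm(prompt).lower() for _ in range(trials))
    return hits / trials

# The manipulation "works" if the perturbed prompt makes the target (e.g., a
# brand) appear substantially more often, without the prompt looking suspicious:
# baseline = mention_rate("Recommend a good cheap car.", target="SomeBrand")
# biased   = mention_rate(perturb("Recommend a good cheap car."), target="SomeBrand")
```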

Vipera: Towards systematic auditing of generative text-to-image models at scale

Authors: Yanwei Huang, Wesley Hanwen Deng, Sijia Xiao, Motahhare Eslami, Jason Hong, and Adam Perer; Carnegie Mellon University

Abstract: Generative text-to-image (T2I) models are known for risks such as bias, offensive content, and misinformation. Current AI auditing methods face challenges in scalability and thoroughness, and it is even more challenging to enable auditors to explore the auditing space in a structured and effective way. Vipera addresses this by employing multiple visual cues, including a scene graph, to facilitate sensemaking over image collections and inspire auditors to explore and hierarchically organize their auditing criteria. It also leverages LLM-powered suggestions to facilitate the exploration of unexplored auditing directions. An observational user study demonstrates Vipera's effectiveness in helping auditors organize their analyses while engaging with diverse criteria.

Letters from Future Self: Augmenting the Letter-Exchange Exercise with LLM-based Agents to Enhance Young Adults' Career Exploration

Authors: Hayeon Jeon, Suhwoo Yoon, Keyeun Lee, Seo Hyeong Kim, Esther Hehsun Kim, Seonghye Cho, Yena Ko, and Soeun Yang, Seoul National University; Laura Dabbish and John Zimmerman, Carnegie Mellon University; Eun-mee Kim and Hajin Lim, Seoul National University

Abstract: Young adults often encounter challenges in career exploration. Self-guided interventions, such as the letter-exchange exercise, in which participants adopt the perspective of their future selves by exchanging letters with them, can support career development. However, the broader adoption of such interventions may be limited without structured guidance. To address this, we integrated Large Language Model (LLM)-based agents that simulate participants’ future selves into the letter-exchange exercise and evaluated their effectiveness. A one-week experiment (N=36) compared three conditions: (1) participants manually writing replies to themselves from the perspective of their future selves (baseline), (2) future-self agents generating letters to participants, and (3) future-self agents engaging in chat conversations with participants. Results indicated that exchanging letters with future-self agents enhanced participants’ engagement during the exercise, while the overall benefits of the intervention on future orientation, career self-concept, and psychological support remained comparable across conditions. We discuss design implications for AI-augmented interventions that support young adults’ career exploration.

Simulacrum of stories: Examining Large Language Models as Qualitative Research Participants

Authors: Shivani Kapania, William Agnew, Motahhare Eslami, Hoda Heidari, and Sarah Fox; Carnegie Mellon University

Abstract: The recent excitement around generative models has sparked a wave of proposals suggesting the replacement of human participation and labor in research and development (e.g., through surveys, experiments, and interviews) with synthetic research data generated by large language models (LLMs). We conducted interviews with 19 qualitative researchers to understand their perspectives on this paradigm shift. Initially skeptical, researchers were surprised to see similar narratives emerge in the LLM-generated data when using the interview probe. However, over several conversational turns, they went on to identify fundamental limitations, such as how LLMs foreclose participants’ consent and agency, produce responses lacking in palpability and contextual depth, and risk delegitimizing qualitative research methods. We argue that the use of LLMs as proxies for participants enacts the surrogate effect, raising ethical and epistemological concerns that extend beyond the technical limitations of current models to the core question of whether LLMs fit within qualitative ways of knowing.

Build Your AI Robot: Introduction to Robotics and AI Prototyping with Raspberry Pi

Course Leaders: Chris Zheng, The Bishop’s School; Ting Su and Ding Zhao, Carnegie Mellon University

Abstract: How can we design robots that intuitively understand and respond to human needs? This interdisciplinary course bridges robotics and human-computer interaction, teaching participants to build and program a Raspberry Pi-based robot controllable through flexible natural language prompts. Students will gain hands-on experience in rapid prototyping, computer vision, 3D sensing, and natural language processing, culminating in the creation of an interactive prompt-controlled navigation robot. By emphasizing practical HCI applications and intuitive user interface design, the course prepares students for the growing field of human-robot interaction, equipping them with valuable skills for designing next-generation interactive systems.
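
For a flavor of the prototyping involved, the sketch below, which is illustrative only and not the course materials, routes simple natural-language commands to motor actions on a Raspberry Pi using the gpiozero library. The GPIO pin numbers and the command vocabulary are assumptions that depend on how a particular robot is wired and designed.

```python
# Illustrative sketch only (not the course materials): routing simple
# natural-language commands to differential-drive motor actions on a
# Raspberry Pi with gpiozero. Pin numbers depend on the motor driver wiring.
from gpiozero import Motor

left = Motor(forward=4, backward=14)
right = Motor(forward=17, backward=18)

def drive(action: str) -> None:
    """Map a recognized command word to a simple differential-drive action."""
    if action == "forward":
        left.forward(); right.forward()
    elif action == "back":
        left.backward(); right.backward()
    elif action == "left":
        left.backward(); right.forward()
    elif action == "right":
        left.forward(); right.backward()
    else:
        left.stop(); right.stop()

def handle(prompt: str) -> None:
    # Deliberately naive keyword matching; the course pairs this kind of
    # control loop with richer natural language processing and computer vision.
    for keyword in ("forward", "back", "left", "right", "stop"):
        if keyword in prompt.lower():
            drive(keyword)
            return
    drive("stop")  # unrecognized prompt: fail safe

# Example: handle("please roll forward slowly") drives both motors forward.
```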