CyLab researchers to present at ACM CHI 2026
Michael Cunningham
Mar 30, 2026
CyLab researchers are set to present four papers, serve on one panel, and lead three workshops at the upcoming Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems (CHI 2026).
The conference will take place in Barcelona from April 13 to 17, bringing together researchers, practitioners, and industry leaders to share their latest work, exchange ideas, and foster collaboration and innovation in advancing digital technologies.
In addition to CyLab’s researchers, more than 170 researchers from across Carnegie Mellon University are authors or co-authors of papers and posters, or will be presenting at panels and workshops at CHI 2026.
Lorrie Cranor, CyLab director, will also deliver an invited talk and be presented with the 2026 ACM SIGCHI Lifetime Research Award, one of the highest honors in the field of human-computer interaction (HCI).
Below, we’ve compiled a list of papers, panels, and workshops that are being presented by CyLab researchers at this year’s event.
Papers
Privy: Envisioning and Mitigating Privacy Risks for Consumer-facing AI Product Concepts
Authors: Hao-Ping (Hank) Lee, Carnegie Mellon University; Yu-Ju Yang, University of Illinois Urbana-Champaign; Matthew Bilik, University of Washington; Isadora Krsek, Thomas Serban von Davier, Kyzyl Monteiro, Jason Lin, Shivani Agarwal, Jodi Forlizzi, and Sauvik Das, Carnegie Mellon University
Abstract: AI creates and exacerbates privacy risks, yet practitioners lack effective resources to identify and mitigate these risks. We present Privy, a tool that guides practitioners without privacy expertise through structured privacy impact assessments to: (i) identify relevant risks in novel AI product concepts, and (ii) propose appropriate mitigations. Privy was shaped by a formative study with 11 practitioners, which informed two versions — one LLM-powered, the other template-based. We evaluated these two versions of Privy through a between-subjects, controlled study with 24 separate practitioners, whose assessments were reviewed by 13 independent privacy experts. Results show that Privy helps practitioners produce privacy assessments that experts deemed high quality: practitioners identified relevant risks and proposed appropriate mitigation strategies. These effects were augmented in the LLM-powered version. Practitioners themselves rated Privy as being useful and usable, and their feedback illustrates how it helps overcome long-standing awareness, motivation, and ability barriers in privacy work.
Passing Down Passwords: How Older Adults Approach Postmortem Account Access and Digital Estate Planning
Authors: Jenny Tang, Xiaoyuan Wu, Lujo Bauer, Nicolas Christin, and Lorrie Faith Cranor, Carnegie Mellon University
Abstract: Traditional estate planning practices enable people to provide their heirs access to the assets left behind but are often insufficient for the transfer and management of online accounts. To understand how estate planning practices could be improved, we conducted 21 semi-structured interviews with older adults in the United States that explored their practices, concerns, and needs regarding postmortem online account access and management. We encountered few formalized digital estate planning practices; many participants use their credential management practices—primarily pen-and-paper—to provide postmortem account access. How participants envision account transfer is motivated by trust in their current practices and in their heirs, while concerns regarding technology hinder adoption of new methods. Participants consistently prioritize accounts with financial assets, and expectations surrounding postmortem account management vary based on individual circumstances, with the common goal of reducing burdens on executors and heirs. Our results suggest the need for developing technical standardization and expert guidance for digital estate planning.
Supporting Informed Self-Disclosure: Design Recommendations for Presenting AI-Estimates of Privacy Risks to Users
Authors: Isadora Krsek and Meryl Ye, Carnegie Mellon University; Wei Xu and Alan Ritter, Georgia Institute of Technology; Laura Dabbish and Sauvik Das, Carnegie Mellon University
Abstract: People candidly discuss sensitive topics online under the perceived safety of anonymity; yet, for many, this perceived safety is tenuous, as miscalibrated risk perceptions can lead to over-disclosure. Recent advances in Natural Language Processing (NLP) afford an unprecedented opportunity to present users with quantified disclosure-based re-identification risk — i.e., “population risk estimates” (PREs). How can PREs be presented to users in a way that promotes informed decision-making, mitigating risk without encouraging unnecessary self-censorship? Using design fictions and comic-boarding, we story-boarded five design concepts for presenting PREs to users and evaluated them through an online survey with N = 44 Reddit users. We found participants had detailed conceptions of how PREs may impact risk awareness and motivation, but envisioned needing additional context and support to effectively interpret and act on risks. We distill our findings into four key design recommendations for how best to present users with quantified privacy risks to support informed disclosure decision-making.
My Money, Your Name: Challenges and Workarounds in ID-Required Mobile Money in East Africa
Authors: Edith T Luhanga, Carnegie Mellon University Africa; Karen Sowon, Indiana University; Giulia Fanti, Conrad Tucker, and Assane Gueye, Carnegie Mellon University
Abstract: Mobile money (MoMo) services have increased access to financial services in low- and middle-income countries (LMICs). However, requirements to register SIM cards with a government-issued identification have left around 18% of users, most without IDs, banking under a third party’s name. Through interviews with 72 urban and rural residents in Kenya and Tanzania, this study provides the first in-depth assessment of how third-party SIM cards are acquired and the challenges and workarounds that arise when using them for MoMo. We document how third-party SIM users use various intermediaries—friends, family, agents, and strangers—to access services and the effects of ID and account misuse by both third-party SIM users and intermediaries. We further outline the personal and systemic challenges that lead to the lack of IDs for SIM registration and discuss how digitization, now underway in both Kenya and Tanzania, should be approached to effectively address these barriers.
Panels
Does Peer Review Need to Change? A Panel on Reporting Standards and Checklists in the Age of AI
Panelists: Andrew Duchowski, Clemson University; Andreas Stefik, University of Nevada Las Vegas; Paul Ralph, Dalhousie University; Alan Dix, Swansea University; Krzysztof Krejtz, SWPS University of Social Sciences and Humanities; Brad A. Myers, Carnegie Mellon University; Joaquim Jorge, Universidade de Lisboa; and Rina R. Wehbe, Dalhousie University
Description: Many scientific fields of study use formally established reporting standards to foster research and experimental design, transparency, replicability, peer review, and student training. Examples include CONSORT in medicine, the What Works Clearinghouse in education, and JARS in psychology. Such standards yield agreement on study reporting and evaluation, even if using different methodologies. CHI has not adopted reporting standards. Like other fields, CHI has seen an increased number of low-quality submissions and reviews fueled by AI. This panel’s objective is to discuss the advantages of and barriers to adopting reporting standards for SIGCHI. Panelists include representatives with significant experience creating, adopting, and operationalizing reporting standards in adjacent fields: software engineering, CS education, and programming languages. The panel will include an overview of the history of reporting standards, a live demo of a standards-based peer review system, discussions of opportunities, challenges, and limitations for SIGCHI reporting standards, and an interactive discussion between attendees and panelists.
Workshops
Human Expertise for AI Red-Teaming and Scalable Evaluation
Organizers: Alice Qian, Carnegie Mellon University; Srravya Chandhiramowuli, University of Edinburgh; Laura Dabbish and Hong Shen, Carnegie Mellon University; Alex S Taylor, University of Edinburgh; Ding Wang, Google; Theodora Skeadas, Humane Intelligence; and Bolor-Erdene Jagdagdorj, Microsoft
Description: Rapid adoption of generative AI has outpaced the infrastructure needed to red team systems responsibly. This workshop tackles a core tension: scaling AI red teaming while centering human expertise and well-being. We convene academic, industry, and nonprofit practitioners for two threads. (A) Vision: surface high-level goals and principles for effective, humane red teaming. (B) Build: identify opportunities to support human-AI red teaming, such as scenario libraries, role prompts for red teamers, and calibration methods that align automated efforts with human expertise. Through this workshop, we will develop a vision for the future of effective AI red teaming that leverages and protects human expertise while meeting the needs of evaluation at scale.
Human-AI Interaction Alignment: Designing, Evaluating, and Evolving Value-Centered AI For Reciprocal Human-AI Futures
Organizers: Hua Shen, New York University; Tiffany Knearem, MBZUAI; Divy Thakkar, Google; Pat Pataranutaporn, Massachusetts Institute of Technology; Anoop K. Sinha, Google; Yike Shi and Jenny T Liang, Carnegie Mellon University; Lama Ahmad, OpenAI; Tanushree Mitra, University of Washington; Brad A. Myers, Carnegie Mellon University; and Yang Li, Google DeepMind
Description: The rapid integration of generative AI into everyday life underscores the need to move beyond unidirectional alignment models that only adapt AI to human values. This workshop focuses on bidirectional human-AI alignment, a dynamic, reciprocal process where humans and AI co-adapt through interaction, evaluation, and value-centered design. Building on our past CHI 2025 BiAlign SIG and ICLR 2025 Workshop, this workshop will bring together interdisciplinary researchers from HCI, AI, social sciences and more domains to advance value-centered AI and reciprocal human-AI collaboration. We focus on embedding human and societal values into alignment research, emphasizing not only steering AI toward human values but also enabling humans to critically engage with and evolve alongside AI systems. Through talks, interdisciplinary discussions, and collaborative activities, participants will explore methods for interactive alignment, frameworks for societal impact evaluation, and strategies for alignment in dynamic contexts. This workshop aims to bridge the disciplines’ gaps and establish a shared agenda for responsible, reciprocal human-AI futures.
PoliSim@CHI 2026: LLM Agent Simulation for Policy
Organizers: Yuxuan Li, Wesley Hanwen Deng, and Xuhui Zhou, Carnegie Mellon University; Kevin Klyman, Stanford University; Chun Yu and Yuanchun Shi, Tsinghua University; Nicholas Vincent, Simon Fraser University; Amy X. Zhang, University of Washington; Maarten Sap, Sauvik Das, and Hirokazu Shirado, Carnegie Mellon University
Description: Recent advances in large language models (LLMs) have opened new possibilities for simulating complex social interactions at scale. LLM agent simulations can embed institutional realities and local knowledge while representing heterogeneous agents with diverse reasoning and communication styles. These capabilities make them promising testbeds for policy, offering ways to stress-test interventions and anticipate unintended consequences. Despite this potential, institutional uptake has been limited. Drawing on HCI traditions such as participatory and user-centered design, we argue that in policy contexts, the usefulness of LLM agent simulations is unlikely to increase linearly with technical sophistication. Instead, it emerges through iterative, stakeholder-engaged design, as policymakers build trust, discover system limits, and recalibrate expectations. This workshop will bring together HCI and NLP researchers, policymakers, and designers to explore how LLM agent simulations can become genuinely useful for policy implementation. We will examine how simulations can be made more useful for policy, how they can be used responsibly, and how simulation capabilities and policy requirements can co-evolve at scale. Through keynotes, presentations, and interactive sessions, participants will share case studies, identify challenges, and chart a cross-disciplinary agenda for technical development, institutional trust, and adoption.