Andrea Bajcsy

Assistant Professor, Carnegie Mellon University Robotics Institute

Talk Title

Towards Open World Robot Safety

Abstract

Robot safety is a nuanced concept. We commonly equate safety with collision avoidance, but in complex, real-world environments (i.e., the “open world”) it can be much more: for example, a mobile manipulator should understand when it is not confident about a requested task, that areas roped off by caution tape should never be breached, and that objects should be gently pulled from clutter to prevent them from falling. However, designing robots that have such a nuanced understanding of safety, and that can reliably generate appropriate actions, is an outstanding challenge. In this talk, I will describe my group’s work on systematically uniting modern machine learning models (such as large vision-language models and latent world models) with classical formulations of safety from the control literature to generalize safe robot decision-making to increasingly open-world interactions. Throughout the talk, I will present experimental instantiations of these ideas in domains like vision-based navigation and robotic manipulation.

Bio

Andrea Bajcsy is an Assistant Professor in the Robotics Institute at Carnegie Mellon University, where she leads the Interactive and Trustworthy Robotics Lab (Intent Lab). She works broadly at the intersection of robotics, machine learning, control theory, and human-AI interaction. Prior to joining CMU, Andrea received her Ph.D. in Electrical Engineering & Computer Science from the University of California, Berkeley in 2022. She is the recipient of the NSF CAREER Award (2025), the Google Research Scholar Award (2024), the Rising Stars in EECS Award (2021), an Honorable Mention for the T-RO Best Paper Award (2020), and the NSF Graduate Research Fellowship (2016). She has also worked at NVIDIA Research for Autonomous Driving.


Kassem Fawaz

Associate Professor, University of Wisconsin–Madison Department of Electrical & Computer Engineering

Talk Title

Exploring LLMs for Privacy-Aware Social Companion Robots

Abstract

Social robots are embodied agents that engage with people following human norms of communication. They listen and speak with people, interact using non-verbal cues, and share the physical environment with them. Without privacy awareness, social robots cannot meet user expectations regarding how they collect, process, and share information in their operating environment. For example, a social robot can share information from group interactions with other family members, occupants, or visitors of the home. In three parts, this talk discusses our current work on establishing design principles for privacy-aware social robots. The first part describes our analysis of family preferences when sharing access to autonomous agents, such as ChatGPT. The second part discusses our efforts to understand whether recent advances in large language models (LLMs) can enhance privacy awareness in robots. The third part discusses our ongoing work on co-designing approaches to signal and express privacy awareness to social robots.

Bio

Kassem Fawaz is the Grainger Institute of Engineering Associate Professor in the Electrical and Computer Engineering department at the University of Wisconsin–Madison, where he serves as the inaugural associate chair for research. He earned his Ph.D. in Computer Science and Engineering from the University of Michigan. His research interests include the security and privacy of user interactions with AI-powered systems. He was awarded the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies in 2019. He also received the National Science Foundation CAREER award in 2020, the Google Android Security and PrIvacy REsearch (ASPIRE) award in 2021, the Facebook Research Award in 2021, the Chancellor Teaching award in 2022, and the Vilas Associates Award in 2024. His research has been funded by the National Science Foundation, the Federal Highway Administration, and the Defense Advanced Research Projects Agency. His work on privacy has been featured in several media outlets, such as the BBC, Wired, the Wall Street Journal, New Scientist, and Computerworld.


Philip Koopman

Associate Professor, Carnegie Mellon University Department of Electrical and Computer Engineering

Talk Title

Autonomous Vehicle Safety

Abstract

This talk will give an overview of autonomous vehicle safety, including getting past the safety rhetoric, safety engineering in a nutshell, why machine learning breaks safety engineering, core ML-related problems for life-critical system safety, the approach of the ANSI/UL 4600 standard for autonomous system safety evaluation, and considerations beyond technical safety metrics.

Bio

Philip Koopman of Carnegie Mellon University is an internationally recognized expert on Autonomous Vehicle (AV) safety whose work in that area spans almost 30 years. He has also worked extensively in more general embedded system design, software quality, and safety across numerous transportation, industrial, and defense application domains, including conventional automotive software and hardware systems. He originated the UL 4600 autonomous vehicle safety standard and received the Industry Legend award at the 2024 Self-Driving Industry Awards.