Workshop on Public Trust in Autonomous Systems

Full-day workshop at the 2025 IEEE International Conference on Robotics & Automation

Location: Meeting Room 406, Georgia World Congress Center, Atlanta, GA

Invited Speakers and Panelists

  • Hadas Kress-Gazit
    Cornell University
    Robotics, Motion Planning, Verification, HRI

  • Karen Leung
    University of Washington
    Autonomous Vehicles, Robotics, HRI, Trajectory Optimization

  • Mengyao Li
    Georgia Institute of Technology
    Human Factors, Human-Robot Teaming, Trust in Automation

  • Roel Dobbe
    Delft University of Technology
    AI Safety, Sustainability, Justice, Public Policy

  • Bryant Walker Smith
    University of South Carolina
    Risk, Automation, Connectivity, Safety and Regulation

Description

Future robotic systems, such as autonomous vehicles and caregiving robots, promise to revolutionize our homes, cities and roads—but how can we trust that they will be safe? How can society understand what type of assurances it should expect and demand from these technologies? Consider a cyclist suddenly falling into the path of an autonomous car. Must the car always avoid a collision? If it does not, is that an unfortunate edge case or an unacceptable safety failure calling for a fleet-wide recall?

The Workshop on Public Trust in Autonomous Systems (PTAS) aims to shed light on what assurances we can make—and demand—around the deployment of autonomous systems, informing the public conversation while these technologies are still at an early stage of development. We believe that for robots, autonomous vehicles, and AI systems to become part of our everyday lives, their safety must be as well understood as that of bridges, power plants, and elevators. The workshop aims to catalyze progress toward this goal by bringing technical and regulatory experts together for a focused day-long discussion, targeting new insights on what it would take to establish rigorous foundations for public trust in autonomous systems.


Official ICRA Website: https://2025.ieee-icra.org/event/icra-2025-workshop-on-public-trust-in-autonomous-systems

Invited Talks

What does safety mean for autonomous systems, and can we always guarantee it?

Speaker: Hadas Kress-Gazit

Abstract: In this (hopefully interactive) talk, I will discuss how I think about safety for autonomous systems that interact with people. I will discuss specifications in the context of human-robot interaction, and argue for runtime verification, feedback, and repair, as opposed to a priori guarantees, as the mechanisms to increase public trust in autonomous systems.
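
To make the runtime-verification idea concrete, here is a minimal, hypothetical sketch (not the speaker's actual system): a monitor checks a minimum-separation specification on every control step and hands control to a repair behavior the moment the check fails. All names (State, nominal_action, repair_action) and numbers are illustrative assumptions.

    import random
    from dataclasses import dataclass

    @dataclass
    class State:
        robot_pos: float   # robot position on a 1-D corridor
        human_pos: float   # human position on the same corridor

    def separation_ok(state: State, threshold: float = 0.5) -> bool:
        # Safety specification: keep at least `threshold` distance from the human.
        return abs(state.robot_pos - state.human_pos) >= threshold

    def nominal_action(state: State) -> float:
        # Task controller: drive the robot toward a goal at x = 5.0.
        return 0.2 if state.robot_pos < 5.0 else 0.0

    def repair_action(state: State) -> float:
        # Fallback behavior: back away from the human to restore the margin.
        return 0.2 if state.robot_pos > state.human_pos else -0.2

    state = State(robot_pos=0.0, human_pos=2.0)
    for t in range(40):
        state.human_pos += random.uniform(-0.15, 0.15)  # human moves unpredictably
        # Runtime check of the specification on the *current* state, rather than
        # relying on an a priori proof that covers every possible human motion.
        if separation_ok(state):
            u = nominal_action(state)
        else:
            u = repair_action(state)
            print(f"t={t}: specification violated, switching to repair controller")
        state.robot_pos += u

The point of the sketch is the architecture (monitor plus repair) rather than the toy dynamics: the specification is checked online, so a violation triggers recovery instead of being an unanticipated failure.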

Bio: Hadas Kress-Gazit is the Geoffrey S.M. Hedrick Sr. Professor at the Sibley School of Mechanical and Aerospace Engineering at Cornell University, and the Associate Dean for Diversity and Academic Affairs of Cornell’s College of Engineering. She received her Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania in 2008 and has been at Cornell since 2009. Her research focuses on formal methods for robotics and automation, and more specifically on synthesis for robotics - automatically creating verifiable robot controllers for complex high-level tasks. Her group explores different types of robotic systems, including modular robots, soft robots, and swarms, and synthesizes ideas from different communities such as robotics, formal methods, control, hybrid systems, and computational linguistics. She received an NSF CAREER award in 2010, a DARPA Young Faculty Award in 2012, Cornell Engineering’s Excellence in Teaching Award in 2013 and 2019, and its Excellence in Research Award in 2021. She is an IEEE Fellow and has served on DARPA’s Information Science and Technology study group (ISAT), as the program chair for Robotics: Science and Systems (RSS) 2018, as the program chair for the International Conference on Robotics and Automation (ICRA) 2022, and as the president of the RSS board (2019-2023), among other leadership positions in the robotics community.

Towards trusted human-centric autonomy

Speaker: Karen Leung

Abstract: Autonomous robots are becoming increasingly prevalent in everyday life, from navigating our roads and sidewalks to assisting in households and warehouses. Yet while humans are remarkably adept at seamlessly avoiding collisions, even in crowded settings, building robots that can safely and fluently interact with humans in a trusted manner remains an elusive task. In this talk, we will discuss how to use human interaction data to learn models that describe those interactions, and techniques to improve the safety and fluency of robot planning and control. First, I will discuss recent work that combines data-driven techniques with control-theoretic models to learn interpretable models of safe human-robot interactions. Second, I will discuss recent approaches to synthesizing robot trajectories that result in safe yet fluent interactions.
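
A rough sketch of the second theme, under stated assumptions: a least-restrictive safety filter that defers to a fluent nominal planner except when a learned safety value function (here replaced by a hand-coded stand-in) indicates the robot is near the boundary of the safe set. The function and parameter names are hypothetical, not taken from the speaker's work.

    import numpy as np

    def safety_value(rel_pos: np.ndarray) -> float:
        # Stand-in for a *learned* value function V(x): positive means safe.
        # In practice this could be a network fit to human interaction data;
        # here it is simply the distance to the human minus a 1 m margin.
        return float(np.linalg.norm(rel_pos)) - 1.0

    def nominal_controller(robot: np.ndarray, goal: np.ndarray) -> np.ndarray:
        # Fluent task behavior: head straight for the goal.
        d = goal - robot
        n = np.linalg.norm(d)
        return 0.1 * d / n if n > 1e-6 else np.zeros(2)

    def safe_controller(robot: np.ndarray, human: np.ndarray) -> np.ndarray:
        # Conservative evasive behavior: move directly away from the human.
        d = robot - human
        n = np.linalg.norm(d)
        return 0.1 * d / n if n > 1e-6 else np.array([0.1, 0.0])

    def filtered_action(robot, human, goal, eps=0.2):
        # Least-restrictive filter: intervene only near the safe-set boundary,
        # so the robot stays fluent whenever the learned model says it is safe.
        if safety_value(robot - human) > eps:
            return nominal_controller(robot, goal)
        return safe_controller(robot, human)

    robot = np.array([0.0, 0.0])
    human = np.array([2.0, 0.2])   # static human for simplicity
    goal = np.array([4.0, 0.0])
    for _ in range(60):
        robot = robot + filtered_action(robot, human, goal)
    print("final robot position:", robot)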

Bio: Karen Leung is an Assistant Professor and the Vagners & Christianson Endowed Faculty Fellow in Aeronautics & Astronautics at the University of Washington. She directs the Control and Trustworthy Robotics Lab (CTRL), which focuses on developing safe, intelligent, and trustworthy autonomous systems that can operate seamlessly with, alongside, and around humans. Before joining UW, Karen was a research scientist in the Autonomous Vehicle Research Group at NVIDIA, where she continues to hold a partial appointment as a faculty scientist. Karen received her M.S. and Ph.D. in Aeronautics and Astronautics from Stanford University and a combined B.S./B.E. in Mathematics and Aerospace Engineering from the University of Sydney, Australia. She is a recipient of the UW + Amazon Science Hub Faculty Research Award, the William F. Ballhaus Prize for best Ph.D. thesis, and an Outstanding Undergraduate Research Mentor Award.

From Trusted Technologies to Trustworthy Companies

Speaker: Bryant Walker Smith

Abstract: “Does the public trust the technology?” is a familiar but unhelpful question in discussions about particular innovations. I shift this inquiry to instead ask whether “the companies behind a given technology are worthy of the public’s trust.” To do this, I propose an affirmative theory of “the trustworthy company,” identify conduct that signals a lack of trustworthiness, and describe how existing legal doctrines can be reconceived as trust-based duties. Automated driving provides my principal case study throughout.

Bio: Bryant Walker Smith is an associate professor in the School of Law and (by courtesy) the School of Engineering at the University of South Carolina, as well as an affiliate scholar at the Center for Internet and Society at Stanford Law School. Trained as a lawyer and an engineer, Smith advises cities, states, countries, and the United Nations on emerging transport technologies. He coauthored the globally influential levels of driving automation, drafted a model law for automated driving, and taught the first legal course dedicated to automated driving (in 2012). Smith is currently writing on what it means for a company to be trustworthy. His publications are available at newlypossible.org. Before joining the University of South Carolina, Smith led the legal aspects of automated driving program at Stanford University, clerked for the Hon. Evan J. Wallach at the United States Court of International Trade, and worked as a fellow at the European Bank for Reconstruction and Development. He holds both an LL.M. in International Legal Studies and a J.D. (cum laude) from New York University School of Law and a B.S. in Civil Engineering from the University of Wisconsin. Prior to his legal career, Smith worked as a transportation engineer.

Trust is contagious: Understanding and shaping public trust in multi-human multi-agent teams

Speaker: Mengyao Li

Abstract: As autonomous systems become integral to high-stakes, real-world applications—from disaster response to mobility and healthcare—their integration into human teams introduces new challenges for building and sustaining public trust. While much research focuses on dyadic human-AI trust, real-world deployments often involve multi-human, multi-agent teams, where trust is not only individual but also social and emergent. In this talk, I will present empirical and computational findings demonstrating that trust is neither static nor isolated—it is contagious. Through behavioral experiments and conversational analysis in human-human-AI teams, we show that one team member’s trust in an AI can significantly influence another’s trust behaviors, perceptions, and even linguistic alignment. These effects persist even when the AI exhibits failures, suggesting that social trust signals can buffer or repair trust violations. I will also discuss the design implications of these findings for trustworthy AI, including how to leverage social dynamics to promote resilient trust in public-facing systems. By accounting for the interpersonal pathways through which trust propagates, we move closer to designing autonomous systems that are not only technically reliable, but also socially intelligent and publicly accepted.
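
As a toy illustration of trust contagion (an assumption-laden sketch, not the speaker's model), the simulation below updates each teammate's trust in an AI from two sources: their own observation of the AI's outcomes and a social term pulling their trust toward the other teammates' average. All coefficients are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    n_humans, n_steps = 4, 50
    trust = rng.uniform(0.4, 0.8, size=n_humans)  # each teammate's trust in the AI
    alpha, beta = 0.10, 0.15                      # own-experience vs. social weights

    for t in range(n_steps):
        # The AI succeeds with probability 0.8; everyone observes the outcome.
        outcome = 1.0 if rng.random() < 0.8 else 0.0
        # Own-experience update: move trust toward the observed reliability.
        own = alpha * (outcome - trust)
        # Contagion term: move trust toward the mean of the *other* teammates,
        # letting one member's trust (or distrust) propagate through the team.
        others_mean = (trust.sum() - trust) / (n_humans - 1)
        social = beta * (others_mean - trust)
        trust = np.clip(trust + own + social, 0.0, 1.0)

    print("final trust:", np.round(trust, 2))

Even in this toy model, a failure observed by everyone depresses trust less when high-trust teammates pull the group average back up, which is one way to read the "buffering" effect described in the abstract.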

Bio: Mengyao Li is an assistant professor in the School of Psychology at Georgia Tech and director of the Hybrid Intelligence (HI) Lab. Her research aims to understand, predict, and shape human-AI-robot communication, social cooperation, and long-term coevolution in safety-critical environments, including military operations, autonomous driving, space exploration, and social interactions with robots. She has written extensively on measuring human trust and team performance using novel computational approaches. As PI or Co-PI on several university- and DoD/DARPA-funded grants, she has proposed and led research developing novel machine learning models that use team-level bio-neuro-behavioral data to predict trust, team performance, and team dynamics. She also serves on the steering committee of the Center for Human-AI-Robot Teaming (CHART) at Georgia Tech, which hosts invited talks, workshops, and forums.

AI safety is sociotechnical, but are our concepts, methods and interventions?

Speaker: Roel Dobbe

Abstract: In this talk, I will share insights from the Sociotechnical AI Systems Lab at Delft University of Technology. In recent years, the risks emerging from deploying AI in high-stakes or sensitive settings have motivated a flurry of measures and interventions to ensure safe and responsible use. However, most efforts have fallen broadly into techno-centric, ethics-centric, or policy-centric buckets, and as such are often siloed and lack comprehensiveness. Algorithmic harms, however, can only be understood, prevented, or addressed as dynamic or emergent phenomena, for which we have to bring into one view the technological tools and infrastructures, the human actors at varying organizational levels, and the institutional factors, be they formal rules or informal norms hidden in culture or political behavior. Given the growing spectrum of algorithmic harms experienced across sectors and spheres of life and society, we urgently need to work toward broadly shared conceptual lenses and associated methodologies for what we understand an ‘AI system’ and its harmful outcomes to be, and for what aspects and factors need to be in view for responsible actors to understand these and own their responsibilities. To make this steep challenge concrete and to offer viable strategies, I lean on lessons from the field of system safety, which offers comprehensive concepts and methods for the anticipation, prevention, and mitigation of harms in software-based systems. I reflect on the current state of the ‘Technical AI Safety’ field to inform a discussion on how sociotechnically oriented fields may work together to ensure that research, policy, and regulatory efforts for AI safety become more comprehensive and effective for those in need of protection.

Bio: Roel Dobbe is an Assistant Professor in Technology, Policy & Management and Director of the Sociotechnical AI Systems Lab at Delft University of Technology. His research addresses the integration and implications of algorithmic technologies in societal infrastructure and democratic institutions, focusing on issues related to safety, sustainability and justice. His projects are situated in various domains, including energy systems, public administration, and healthcare. Roel’s system-theoretic lens enables addressing the sociotechnical and political nature of algorithmic and artificial intelligence systems across analysis, engineering design and governance, always with an aim to empower domain experts and affected communities. His results have informed various policy initiatives, including the European AI Act as well as the development of the algorithm watchdog in the Netherlands. Roel is also on the board of PublicSpaces, a Dutch foundation uniting more than forty public and civil society organizations working towards an internet that is not dependent on commercial extraction. Prior to Delft, Roel was a graduate student in the Berkeley AI Research (BAIR) Lab, receiving a PhD in Control & Intelligent Systems from the Department of Electrical Engineering and Computer Sciences at UC Berkeley (2018), where he received the Demetri Angelakos Memorial Achievement Award. Subsequently, he was an inaugural postdoc at the AI Now Institute in New York City.

Author Information

Oral presentation: All accepted papers are invited to a 5-minute oral presentation. We require at least one author from each paper to attend the workshop in person and deliver the talk. We will announce the Best Presentation Award at the end of the workshop, which will be selected based on the quality of the presentation.

Poster session: All accepted papers are invited to a poster session. The poster should be no larger than 4' x 4'. We will announce the Best Poster Award at the end of the workshop, which will be selected based on the quality of the poster.

Organizers

  • Haimin Hu
    Princeton University

  • Kaiqu Liang
    Princeton University

  • Zixu Zhang
    Princeton University

  • Andrea Bajcsy
    Carnegie Mellon University

  • Jaime Fernández Fisac
    Princeton University

Sponsors