One of the main goals of our workshop is to bridge the gap between the Computational Cognitive & Behavior Science, Explainable AI, Transportation, and Autonomous Driving communities. Our SIAM workshop mainly targets theoretical frameworks and practical algorithms for perception, decision-making, and planning, integrated with social factors and computational cognitive science, to enable autonomous vehicles (AVs) to interact with human agents in a socially compatible way. Topics of interest include, but are not limited to:

  • Applications of AVs interacting with human agents;
  • Algorithms of perception, decision-making, planning for human-like AVs;
  • Cognitive aspects and models for autonomous driving;
  • Cognitive and mental modeling toward socially compatible driving, e.g., Theory of Mind and Theory of Machine;
  • Social cues for AVs in interactive driving tasks;
  • Action-reaction cycle modeling and validation;
  • Explainable interaction and planning in interactive driving tasks;
  • Evaluation and quantification of inter-human interactions and their implementations to human-AV interactions;
  • Human driving behavior/intention modeling, simulation, and analysis;
  • Heterogeneous human-agent teams;
  • Interactive traffic scene analysis;
  • Interaction pattern learning, extraction, and recognition;
  • Interactive simulations and human-in-the-loop simulations;
  • Learning-based theory for social interaction among human drivers;
  • Social and group intelligence in interactions among multiple human agents;
  • Spatiotemporal driving behaviors in interactive traffic scenes;

Call for Papers

Authors are invited to submit full-length papers of up to 6 pages of technical content, including figures and references. Additional pages will be charged at a rate of $100 per page, up to a limit of two additional pages per paper. Each paper will undergo peer review by at least two independent reviewers. Contributions will be assessed on relevance, originality and novelty of ideas, technical soundness, and quality of presentation. Each accepted paper must be covered by at least one non-student registration. Additional papers by the same authors will be charged at a flat rate of $400 per paper. To maximize visibility and impact, all accepted papers will be published in the IEEE Xplore digital library through Open Preview and will be freely accessible and downloadable by all, in final format, from one month prior to the conference through the conference end date.

  • Registration for attending the workshop day only requires a separate fee; IEEE ITSS members attend the workshop day free of charge.
  • For each accepted workshop paper, one author must pay the full publication fee for the paper to be included in the proceedings.

Important Deadlines:
  • February 01, 2023 (firm deadline, no extension): Workshop Paper Submission Deadline
  • March 30, 2023: Workshop Paper Notification of Acceptance
  • April 22, 2023: Workshop Final Paper Submission Deadline

Program (8:30 a.m. - 12:45 p.m., June 4th, 2023)

The program of this workshop includes seven talks in several sessions.
- The talks will be streamed online via Zoom.
- Recordings of all talks will be available on the SIAM YouTube channel.
- 👉 Check out our Flyer!

Time Speaker Topic
8:30-8:35 Organizers Opening
8:35-9:15 Cathy Wu
Massachusetts Institute of Technology
Intelligent Coordination for Sustainable Roadways – If Autonomous Vehicles are the Answer, then What is the Question? Abstract: For all the hype, autonomous vehicles have yet to make our roadways more sustainable: safer, cheaper, cleaner. This talk suggests that the key to unlocking sustainable roadways is to shift the focus from autonomy-driven design to use-driven design. Based on recent work, the talk focuses on three critical priorities, safety, cost, and environment, each leveraging the 'autonomy' capability of coordinating vehicles. But fully autonomous agents are not the only entities that can coordinate. A paragon of safety is air traffic control, in which expert operators remotely coordinate aircraft. The work brings these ideas to dense roadway traffic and analyzes the scalability of operators. Another, much cheaper, way to coordinate is to give drivers a smartphone app. The work characterizes how well such lower-tech systems can still achieve autonomous capabilities. For cleaner roadways, dozens of articles have considered coordinating vehicles to reduce emissions. This work models whether doing so would move the needle on climate change mitigation goals. To study these multi-agent coordination problems, the work leverages queueing theory, Lyapunov stability analysis, transfer learning, and multi-task reinforcement learning. The talk will also discuss issues of robustness that arise when applying learning-based techniques and a new line of work designed to address them. Overall, the results indicate promise for intelligent coordination to enable sustainable roadways.
9:20-9:50 Gustav Markkula
University of Leeds
Adopting Knowledge and Models from Cognitive Neuroscience to Enable Socially Interactive Automation Abstract: It is becoming increasingly clear that human interaction in traffic is underpinned by a number of non-trivial perceptual, cognitive, and motor mechanisms, which both constrain and determine the behaviour of human road users. In this talk, I will give an overview of a number of such mechanisms, which have been mathematically modelled in cognitive neuroscience, and which our team and others have adopted into models of human road user behaviour. Finally, I will discuss to what extent, and for what uses, it may be important to consider these types of underlying human mechanisms in the development and testing of automated vehicles.
9:55-10:25 Igor Gilitschenski
University of Toronto
Inductive Biases for Safe Interactive Autonomy Abstract:
10:30-11:00 Yan Chang
Zoox
Automatic Parameters Tuning for Autonomous Systems Abstract: Thousands of parameters need to be tuned across the autonomy stack and beyond in autonomous driving systems. It is challenging to balance requirements such as safety, progress, and comfort. In this talk, we will share an automatic parameter-tuning system for autonomous driving systems. Four key takeaways: (1) a generic framework for parameter tuning across different use cases; (2) a scalable framework that can use a large number of scenes for auto-tuning and validation; (3) a modularized architecture; (4) a user-friendly end-to-end interface for experiments, visualization, and analysis.
11:05-11:35 Yuxiao Chen
Nvidia
How to plan with prediction: a policy planning perspective Abstract: In a typical autonomous vehicle (AV) stack, motion predictions are consumed by the planning module to generate a safe and efficient motion plan for the AV. While deep learning has taken the field of prediction by storm and keeps improving the state of the art in prediction accuracy, it is unclear how these models help the subsequent motion plan. This talk focuses on how prediction models are used together with the downstream planning module and shows that one key factor in improving closed-loop performance is policy planning, that is, planning a motion policy instead of a single trajectory. Our recent works use prediction models to generate scenario trees and then plan tree-structured motion policies capable of reacting to the environment's behavior. Thanks to this reactiveness, policy planning significantly outperforms traditional benchmarks in closed-loop simulation. As expected, the increased complexity leads to higher computational cost, and we will discuss the limitations of policy planning in the talk as well.
11:35-12:05 Olger Siebinga
Delft University of Technology
How Risk Perception and Communication can be used to Model Human Traffic Interactions Abstract: Interaction-aware autonomous driving is mostly achieved by including models of human driving behavior in autonomous vehicles. These models can predict the future actions of human traffic participants, and based on this information the autonomous vehicle can better handle interactions in traffic. However, many of these approaches are based on game theory and therefore make strong assumptions about human behavior, for example, that humans are rational and do not communicate. In this talk, I will present an alternative approach to modeling human behavior in traffic interactions. We will discuss how perceived risk is a driving factor behind human behavior in non-interactive scenarios, and how communication plays an important role in interactions. Combined with Simon's ideas of bounded rationality and satisficing, these concepts led to our novel modeling approach: the communication-enabled interaction model.
12:10-12:40 Katherine Driggs-Campbell
University of Illinois Urbana-Champaign
People as Sensors: A social inference approach for occlusion-aware autonomy Abstract: Autonomous vehicles have the potential to change the foundations of our way of life. However, the desirable impacts of autonomy are only achievable if autonomous vehicles can effectively interact with human agents and behave in similar ways. One key challenge is operating with limited sensing in partially observable environments, where occluded human agents are prevalent. Humans often intuitively infer the presence of occluded obstacles and agents simply by observing how nearby drivers are behaving. Much like humans in the real world who observe other drivers to make inferences, we have designed a framework that treats human drivers as sensors to improve map estimation, as a proxy for detection. Our method handles multi-agent scenarios, combining measurements from multiple observed drivers using evidential theory to solve the sensor fusion problem. We demonstrate our methods on real-world traffic data, showing effective inference in complex multi-agent environments, and translate our approach to a mobile robot in a crowd navigation setting.
12:40-12:45 Organizers Discussion and conclusions
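To illustrate the policy-planning idea discussed in the program (planning a motion policy over a scenario tree instead of a single trajectory), here is a minimal toy sketch. It is not the speaker's implementation; the tree, actions, and costs are invented for illustration: backward induction assigns one action per predicted scenario node, so the plan can react to whichever scenario unfolds.

```python
# Toy sketch of policy planning over a scenario tree (illustrative only).
# The planner picks one action *per tree node*, so the resulting motion
# policy reacts to whichever predicted scenario actually unfolds.

ACTIONS = ("brake", "keep")

def stage_cost(node, action):
    # Hypothetical cost: braking costs a little, a missed yield costs a lot.
    cost = 1.0 if action == "brake" else 0.0
    if node.get("cut_in") and action != "brake":
        cost += 100.0  # the other car cuts in and we did not yield
    return cost

def plan_policy(node):
    """Backward induction: return (expected cost, policy dict) for a subtree."""
    best_cost, best_policy = float("inf"), {}
    for a in ACTIONS:
        cost, policy = stage_cost(node, a), {}
        for p, child in node.get("children", []):
            child_cost, child_policy = plan_policy(child)
            cost += p * child_cost
            policy.update(child_policy)
        if cost < best_cost:
            policy[node["name"]] = a
            best_cost, best_policy = cost, policy
    return best_cost, best_policy

# Two predicted scenarios: the neighbouring car cuts in (p=0.3) or stays (p=0.7).
tree = {"name": "root", "children": [
    (0.3, {"name": "cut_in", "cut_in": True}),
    (0.7, {"name": "stays", "cut_in": False}),
]}

cost, policy = plan_policy(tree)
print(policy)  # reactive policy: brake only in the cut-in branch
```

An open-loop trajectory would have to commit to one action for both branches; the tree-structured policy defers the choice, which is why closed-loop performance improves.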
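The "people as sensors" talk mentions fusing evidence from multiple observed drivers using evidential theory. As a hedged illustration (not the speakers' implementation; the mass values and two-element frame are invented), here is Dempster's rule of combination applied to occupancy evidence about an occluded cell:

```python
# Minimal sketch of Dempster's rule from evidential (Dempster-Shafer) theory,
# combining occupancy evidence from two observed drivers (illustrative only).
from itertools import product

# Frame of discernment {occ, free}; the full set represents "unknown".
OCC, FREE, UNK = frozenset({"occ"}), frozenset({"free"}), frozenset({"occ", "free"})

def combine(m1, m2):
    """Normalized conjunctive combination of two mass functions."""
    fused = {OCC: 0.0, FREE: 0.0, UNK: 0.0}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] += wa * wb
        else:
            conflict += wa * wb  # mass that falls on the empty set
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

# Hypothetical evidence from two observed drivers about one occluded cell:
driver1 = {OCC: 0.6, FREE: 0.1, UNK: 0.3}
driver2 = {OCC: 0.5, FREE: 0.2, UNK: 0.3}
fused = combine(driver1, driver2)
print(fused)
```

Agreeing evidence reinforces the "occupied" hypothesis while the explicit "unknown" mass keeps the fusion honest about ignorance, which is the appeal of evidential theory over a plain Bayesian product here.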

Invited Speakers

Cathy Wu
Massachusetts Institute of Technology

Cathy Wu is an Assistant Professor at MIT in LIDS, CEE, and IDSS. She holds a Ph.D. from UC Berkeley, and a B.S. and M.Eng. from MIT, all in EECS, and completed a postdoc at Microsoft Research. Her research interests are at the intersection of machine learning, decision making, and mobility. Her current work focuses on how learning-based methods can advance emerging mobility systems by better coping with the complexity of decisions and control. She is broadly interested in enabling policy-relevant research by pushing the boundaries of learning, control, and optimization. Cathy has received a number of awards, including the NSF CAREER award, dissertation awards, and paper distinctions. Her work has appeared in the press, including NOVA, Wired, Science Magazine, the MIT Homepage, and TEDxMIT.

Gustav Markkula
University of Leeds

Prof. Gustav Markkula is an engineer by training, and applies quantitative methods and models to the study of human behaviour and cognition in road traffic. He has a background in automotive industry R&D (Volvo), and is currently Chair in Applied Behaviour Modelling at the Institute for Transport Studies, University of Leeds, UK. In his research, he specialises in the adoption and integration of models from computational cognitive neuroscience, to support development and testing of safe and human-acceptable technology and automation.

Igor Gilitschenski
University of Toronto

Igor Gilitschenski is an Assistant Professor of Computer Science at the University of Toronto, where he leads the Toronto Intelligent Systems Lab. He is also a (part-time) Research Scientist at the Toyota Research Institute. Prior to that, Dr. Gilitschenski was a Research Scientist at MIT's Computer Science and Artificial Intelligence Lab and the Distributed Robotics Lab (DRL), where he was the technical lead of DRL's autonomous driving research team. He joined MIT from the Autonomous Systems Lab of ETH Zurich, where he worked on robotic perception, particularly localization and mapping. Dr. Gilitschenski obtained his doctorate in Computer Science from the Karlsruhe Institute of Technology and a Diploma in Mathematics from the University of Stuttgart. His research interests involve developing novel robotic perception and decision-making methods for challenging dynamic environments. He is the recipient of several best paper awards, including at the American Control Conference, the International Conference on Information Fusion, and IEEE Robotics and Automation Letters.

Yan Chang

Yan Chang is a Tech Lead Manager and Software Engineer at Zoox, where she applies planning and machine learning techniques to autonomous driving in dense urban environments. She serves as an associate editor of IEEE Transactions on Transportation Electrification. She holds Master's and Ph.D. degrees in Mechanical Engineering from the University of Michigan, Ann Arbor.

Yuxiao Chen

Yuxiao Chen is a senior research scientist in the autonomous vehicle research group at Nvidia. His research interests include safe autonomy, motion planning for AVs (especially policy planning), and generative AI for closed-loop simulation and scene generation for AVs. He did his undergraduate studies at Tsinghua University, obtained his Ph.D. from the University of Michigan in 2018, and was a postdoc at Caltech from 2018 to 2021. He joined Nvidia Research in 2021.

Olger Siebinga
Delft University of Technology

Olger Siebinga is a PhD candidate at Delft University of Technology, working in the field of human-robot interaction. With a background in mechanical engineering, he has always been fascinated by the combination of mechanical, electrical, and software engineering. But throwing humans into the mix is what makes things really interesting. Many robots can behave safely and optimally in their own perfect, isolated world. But to make modern robots, such as automated vehicles, function in the real world, they must be able to interact with humans in a safe and natural manner. This is the focus of his current work: understanding human (driving) behavior and describing it mathematically so that automated vehicles can make decisions based on their understanding of humans.

Katherine Driggs-Campbell
University of Illinois Urbana-Champaign

Katie Driggs-Campbell is currently an assistant professor and Bruning Faculty Fellow in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Prior to joining UIUC, she received a B.S.E. with honors from Arizona State University in 2012, and an M.S. and Ph.D. from UC Berkeley in 2015 and 2017, respectively, and was a postdoc in the Stanford Intelligent Systems Laboratory. Katie now runs the Human-Centered Autonomy Lab, which aims to design autonomous systems and robots that can safely interact with people in the real world. She is a recent recipient of the NSF CAREER award and the IEEE RAS Early Academic Career Award.

Supported by