Suguman Bansal

Tutorial Speaker

Suguman Bansal is an assistant professor in the School of Computer Science at Georgia Institute of Technology. Her research interests lie at the intersection of Artificial Intelligence and Programming Languages. Specifically, she works on developing tools and techniques to improve the quality of automated verification and synthesis of computational systems. Her recent work concerns providing formal guarantees about learning-enabled systems, with a focus on Reinforcement Learning.

Bansal received her Ph.D. (2020) and M.S. (2016) in Computer Science from Rice University, and her B.S. (with Honors, 2014) in Mathematics and Computer Science from Chennai Mathematical Institute. She is the recipient of the ATVA Best Paper Award 2023, the Future Faculty Fellowship 2019, MIT EECS Rising Stars 2021, the Andrew Ladd Fellowship 2016, and a Gold Medal at the ACM Student Research Competition at POPL 2016.

Talk

Specification-Guided RL


The unprecedented proliferation of data-driven approaches, especially machine learning, has put the spotlight on building trustworthy AI by combining logical reasoning with machine learning. Reinforcement Learning from Logical Specifications is one such topic, in which formal logical constructs are used to overcome challenges faced by modern RL algorithms. Research on this topic is scattered across venues: foundational work has appeared at formal methods and AI venues, while algorithmic development and applications have appeared at machine learning, robotics, and cyber-physical systems venues. This tutorial consolidates recent progress into a single overview aimed at the typical Formal Methods researcher. It is designed to explain the importance of using formal specifications in RL and to encourage researchers both to apply existing techniques for RL from logical specifications and to contribute to the growing body of work on this topic.

In this tutorial, we introduce reinforcement learning as a tool for automated synthesis of control policies and discuss the challenge of encoding long-horizon tasks using rewards. We then formulate the problem of reinforcement learning from logical specifications and present recent progress in developing scalable algorithms as well as theoretical results demonstrating the hardness of learning in this context.
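As a rough illustration of the core idea (this is a minimal sketch, not code from the tutorial), one common approach represents a logical specification such as "eventually visit A, and then eventually visit B" as a small finite-state monitor and rewards the agent whenever the monitor makes progress. The class name TaskMonitor, the propositions "A" and "B", and the particular reward choice below are assumptions made only for this example.

```python
# Illustrative sketch: a finite-state monitor for the sequential task
# "visit A, then visit B", whose progress supplies the reward signal.
# All names here are hypothetical, chosen for the example.

class TaskMonitor:
    """Tracks progress through the spec: eventually A, then eventually B."""

    def __init__(self):
        self.state = 0  # 0: waiting for A, 1: waiting for B, 2: done

    def step(self, labels):
        """Advance on the set of propositions true in the current step.

        Returns 1.0 each time the monitor makes progress, 0.0 otherwise
        (one simple reward choice among many).
        """
        if self.state == 0 and "A" in labels:
            self.state = 1
            return 1.0
        if self.state == 1 and "B" in labels:
            self.state = 2
            return 1.0
        return 0.0

    @property
    def done(self):
        return self.state == 2


# Usage: an RL agent would condition its policy on the pair
# (environment state, monitor state), so the reward is Markovian
# even though the task spans many environment steps.
monitor = TaskMonitor()
trace = [set(), {"A"}, set(), {"B"}]           # propositions observed per step
rewards = [monitor.step(labels) for labels in trace]
print(rewards, monitor.done)                   # [0.0, 1.0, 0.0, 1.0] True
```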
