SETSS 2025

Tutorial

Safeguarding Deep Reinforcement Learning Systems via Formal Methods: From Safety-by-Design to Runtime Assurance

Min Zhang

Session 1: Mon, 14:00, Room 402 (90 min)
Session 2: Mon, 16:00, Room 402 (90 min)

Deep neural networks (DNNs) have shown remarkable potential for decision-making and control in deep reinforcement learning (DRL) systems, yet their complexity and opacity make it challenging to ensure the safety of the systems in which they operate. Drawing on our experience, we argue that formal methods are crucial for training AI models that are not only robust but also certifiable, ensuring system safety at every stage from training to deployment. We demonstrate that integrating formal methods is essential to providing a comprehensive safety guarantee for DRL systems throughout their lifecycle.