Building Safe Autonomous Systems Using Imperfect Components
Abstract
Modern autonomous systems are ensembles of components implementing machine learning, control, scheduling, and security. Current design flows aim for each of these components to work perfectly, and system design consists of composing them. As a result, research in machine learning aims at near-perfect classification or estimation, scheduling techniques aim to meet all deadlines, and security algorithms aim for fully secure systems. While such separation of concerns has served us well until now, as systems become more complex, this goal of achieving perfection is becoming unreasonable. In this mini-course we will argue that we can design safe autonomous systems without requiring their components to be perfect, as long as the imperfections of one component are balanced by suitable actions from other components. Such a design approach is potentially more reasonable and cost-effective, and we will provide examples of how it plays out.
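
To make the idea concrete, here is a minimal, hypothetical sketch (not taken from the course material) of one such balance: a discrete-time linear time-invariant plant under state feedback in which the scheduler occasionally lets a control job miss its deadline, and the actuator compensates by holding the previously computed input. The plant matrices, the gain K, and the miss probability are illustrative assumptions chosen for this sketch, not values from the course.

```python
# Hypothetical sketch: an imperfect scheduler (occasional deadline misses) is
# balanced by a simple actuator policy (hold the last input). The closed loop
# can remain well behaved even though no single component is perfect, provided
# misses are rare enough.
import numpy as np

# Double integrator (marginally unstable), discretized with step h (illustrative values).
h = 0.05
A = np.array([[1.0, h],
              [0.0, 1.0]])
B = np.array([[h * h / 2.0],
              [h]])

# A stabilizing state-feedback gain chosen by hand for this example (not optimized).
K = np.array([[10.0, 5.0]])

rng = np.random.default_rng(0)
miss_probability = 0.2          # assumed fraction of control jobs that miss their deadline
x = np.array([[1.0], [0.0]])    # initial state
u_prev = np.zeros((1, 1))       # input currently held by the actuator

for k in range(400):
    if rng.random() < miss_probability:
        u = u_prev              # deadline miss: actuator holds the last computed input
    else:
        u = -K @ x              # deadline met: fresh feedback input
    x = A @ x + B @ u
    u_prev = u

print("state norm after 400 steps:", float(np.linalg.norm(x)))
```

Under these assumptions the state typically stays bounded even though roughly one in five inputs is stale; when such a hold strategy suffices, and how to co-design the controller and the schedule when it does not, is the kind of question addressed under "Controller schedule synthesis with deadline violations" below.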
Course outline
Background topics
- Feedback controllers for linear time-invariant systems
- Basics of timing analysis for distributed embedded systems
- Incorporating delays in controller design
Advanced topics
- Controller schedule synthesis with deadline violations
- Controllers with ML components
- Controller synthesis for edge-cloud perception processing
- Open problems
Audience background
The course assumes a general computer science background and familiarity with basic calculus, linear algebra, and formal languages & automata theory. No background in control theory is assumed. The exposition will be primarily formal and mathematical.