Research Topics on Trustworthy AI
We are actively working on the following research directions in the area of trustworthy AI:
- neural network verification
- neural network control system verification
- probabilistic robustness analysis
- shielding
Neural network verification
Given a neural network, an input constraint, and a property, the neural network verification problem is to determine whether there is an input that satisfies the constraint but violates the property; a minimal encoding sketch appears after the list below. We are interested in the following research topics:
- developing efficient and scalable algorithms for verifying the safety and reliability of different types of neural networks, such as deep neural networks, quantized neural networks, and reinforcement learning networks;
- developing a range of verification algorithms, such as GPU-accelerated, SMT-based, incremental, and CEGAR-based approaches; and
- developing techniques for training and verifying the safety and reliability of neural networks used in autonomous driving, medical diagnosis, and other safety-critical applications.
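To make the problem statement above concrete, here is a minimal, illustrative sketch of an SMT-based encoding using the z3-solver Python package. The tiny 2-2-1 ReLU network, its weights, the input box, and the output property are all hypothetical, chosen only to show how a counterexample search is posed; this is not one of our verifiers.

```python
# Minimal sketch of neural network verification as counterexample search:
# find an input satisfying the constraint that violates the property.
# Assumes the z3-solver package; the network and weights are illustrative only.
from z3 import Real, Solver, If, sat

def relu(z):
    return If(z >= 0, z, 0)

x1, x2 = Real('x1'), Real('x2')

# Hidden layer (hand-picked weights, not from any real model).
h1 = relu(1.0 * x1 - 2.0 * x2 + 0.5)
h2 = relu(-0.5 * x1 + 1.0 * x2)

# Output neuron.
y = 2.0 * h1 - 1.0 * h2 + 0.1

s = Solver()
# Input constraint: x in [0, 1] x [0, 1].
s.add(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1)
# Negation of the property "y <= 3 for every admissible input".
s.add(y > 3)

if s.check() == sat:
    print("Counterexample found:", s.model())      # property violated
else:
    print("Property holds on the input region")    # no violating input exists
```

A plain SMT encoding like this scales only to small networks; dedicated verifiers with specialized bound propagation and search are what make larger networks tractable.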
Neural network control system verification
Given a neural network controller and a model of the system dynamics, the neural network control system verification problem is to determine whether the closed-loop system avoids unsafe states under the control of the neural network controller. We are working on training verifiable neural network controllers for specific tasks in autonomous driving and other safety-critical applications, such as following a vehicle, and on demonstrating the safety of applying these controllers to those tasks.
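To make the closed-loop problem concrete, the following is a minimal sketch of bounded-horizon reachability with interval arithmetic. The car-following dynamics, the affine map standing in for a trained controller, and all constants are assumptions for illustration only, not our actual models or toolchain.

```python
import numpy as np

# Sketch of bounded-horizon closed-loop reachability with interval arithmetic.
# State = (gap, relative velocity) of a following vehicle; the affine
# "controller" standing in for a trained network is purely illustrative.
DT = 0.1          # time step
UNSAFE_GAP = 1.0  # any reachable gap below this counts as unsafe

def affine_interval(w, b, lo, hi):
    """Tight interval bounds of w . x + b over the box [lo, hi]."""
    low = b + np.sum(np.where(w >= 0, w * lo, w * hi))
    high = b + np.sum(np.where(w >= 0, w * hi, w * lo))
    return low, high

def step_bounds(lo, hi):
    """Over-approximate one step: gap' = gap + DT*vel, vel' = vel + DT*u."""
    u_lo, u_hi = affine_interval(np.array([0.5, -0.8]), 0.0, lo, hi)  # controller
    gap_lo, gap_hi = lo[0] + DT * lo[1], hi[0] + DT * hi[1]
    vel_lo, vel_hi = lo[1] + DT * u_lo, hi[1] + DT * u_hi
    return np.array([gap_lo, vel_lo]), np.array([gap_hi, vel_hi])

# Initial set: gap in [5, 6] m, relative velocity in [-1, 0] m/s.
lo, hi = np.array([5.0, -1.0]), np.array([6.0, 0.0])
for t in range(50):
    lo, hi = step_bounds(lo, hi)
    if lo[0] < UNSAFE_GAP:
        print(f"Possible safety violation at step {t}: gap may drop to {lo[0]:.2f} m")
        break
else:
    print("No unsafe state reachable within the horizon (over-approximation)")
```

Because the intervals over-approximate the reachable states, a "no unsafe state reachable" answer is sound, while a flagged violation may be spurious and calls for a tighter analysis.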
Probabilistic robustness analysis
If we permit the network to make mistakes with a small probability, at a high confidence level, we can verify larger and more complex neural networks. We are working on developing techniques that provide probabilistic robustness guarantees for AI systems, such as autonomous driving systems and object detection algorithms, using PAC model learning and verification.
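As a rough illustration of the statistical flavor of such guarantees, the sketch below estimates a robustness-violation rate by sampling and sizes the sample with a Hoeffding bound. The toy classifier, the perturbation model, and the parameters (eps, delta, radius) are illustrative assumptions, not our PAC-based method itself.

```python
import math
import numpy as np

# Sketch of a sampling-based (PAC-style) robustness check.  With probability
# at least 1 - delta over the sampling, the estimated violation rate is within
# eps of the true rate (Hoeffding bound); the classifier and perturbation
# model below are illustrative placeholders, not a real system.
def pac_sample_size(eps, delta):
    """Samples needed so that |estimate - truth| <= eps with prob. >= 1 - delta."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def is_violation(model, x, x_perturbed):
    """Placeholder check: does the perturbation change the predicted label?"""
    return model(x) != model(x_perturbed)

def estimate_violation_rate(model, x, radius, eps=0.01, delta=1e-3, rng=None):
    rng = rng or np.random.default_rng(0)
    n = pac_sample_size(eps, delta)
    violations = 0
    for _ in range(n):
        noise = rng.uniform(-radius, radius, size=x.shape)  # L_inf perturbation
        violations += is_violation(model, x, x + noise)
    return violations / n, n

# Toy "model": a linear classifier; the input point is arbitrary.
model = lambda x: int(x.sum() > 0.5)
rate, n = estimate_violation_rate(model, np.full(4, 0.2), radius=0.1)
print(f"Estimated violation rate {rate:.4f} from {n} samples")
```

The point of the sketch is the trade-off it exposes: relaxing the guarantee from "no violating input exists" to "the violation rate is at most eps with confidence 1 - delta" replaces exhaustive reasoning with a sample size that is independent of the network's size.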
Shielding
Shielding is a common method for ensuring the safety of a system driven by a black-box controller (such as a neural network controller obtained via deep reinforcement learning (DRL)); it typically relies on simpler, verified fallback controllers. We are working on developing shielding techniques for neural network control systems to ensure the safety and reliability of AI systems.
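The sketch below illustrates the basic shielding pattern under toy assumptions: the learned policy proposes an action, the shield checks a one-step look-ahead against a safe set, and otherwise substitutes a simple fallback controller. The dynamics, safe set, and both controllers are hypothetical placeholders, not a verified system.

```python
import numpy as np

# Sketch of runtime shielding: the black-box (e.g. DRL) controller proposes an
# action, the shield simulates one step, and if the result would leave a
# verified safe set it falls back to a simpler verified controller.  The
# dynamics, safe set, and both controllers below are illustrative placeholders.
DT = 0.1
V_MAX = 1.0  # safe set: speeds with |v| <= V_MAX

def dynamics(speed, accel):
    """Toy longitudinal dynamics: speed' = speed + DT * acceleration."""
    return speed + DT * accel

def is_safe(speed):
    return abs(speed) <= V_MAX

def drl_action(speed, rng):
    """Stand-in for the unverified learned policy (here: a random acceleration)."""
    return rng.uniform(-3.0, 3.0)

def fallback_action(speed):
    """Simple fallback controller: brake towards zero speed."""
    return -2.0 * speed

def shielded_action(speed, rng):
    proposed = drl_action(speed, rng)
    # Accept the learned action only if the next state stays in the safe set.
    if is_safe(dynamics(speed, proposed)):
        return proposed
    return fallback_action(speed)

rng = np.random.default_rng(0)
speed = 0.5
for _ in range(50):
    speed = dynamics(speed, shielded_action(speed, rng))
    assert is_safe(speed), "shield should keep the speed within the safe set"
print(f"Final speed after shielded execution: {speed:.3f} (|v| <= {V_MAX})")
```

In this pattern the safety argument rests entirely on the safe set and the fallback controller, which must be verified offline; the learned policy is free to optimize performance as long as the shield can always recover.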