Partly due to recent breakthroughs in deep learning, artificial intelligence is penetrating safety-critical systems, and these systems are becoming increasingly intelligent. Notable examples of safety-critical intelligent systems are autonomous vehicles, e.g. self-driving cars and unmanned aircraft. Guaranteeing the trustworthiness of these systems is of paramount importance, since errors in them may entail catastrophic consequences, e.g. the loss of human lives. Our group has been contributing to developing theories, techniques, and tools for guaranteeing the trustworthiness of intelligent systems. Below, we outline the main research directions of our group:
- Formal methods for Artificial Intelligence: We investigate robustness verification of various neural networks, such as DNNs, RNNs, and GNNs, applying techniques such as abstract interpretation and counterexample-guided refinement. We also apply model learning in this context to learn approximating models, which in turn provide tools for explaining neural models. We have implemented PRODeep, a platform for analyzing various robustness properties of DNNs. It incorporates constraint-based, abstraction-based, and optimisation-based robustness verification algorithms, and its modular architecture enables easy comparison of different algorithms.
- Verification of RISC-V CPU designs: To improve performance, more and more intelligent systems use domain-specific CPUs. RISC-V is an open instruction set that supports rapid development of domain-specific CPUs and is becoming increasingly popular. We focus on Chisel, a hardware construction language embedded in Scala for modular and parametric design of RISC-V CPUs. We investigate techniques for automated verification of Chisel RISC-V CPU designs against instruction-set compatibility and temporal properties. Moreover, we consider the formal verification of the compilation process of FIRRTL (an intermediate representation for Chisel) in proof assistants (e.g. Coq). More information can be found here.
- Probabilistic Model Checking: We work on algorithms for verifying properties of probabilistic systems and apply them in areas such as networked systems and differential privacy protocols. We have implemented our algorithms in our tool ePMC, which supports Markov chains and Markov decision processes (MDPs), and properties specified in PCTL, PLTL, and their combination PCTL*. ePMC is highly modular, allowing anyone to extend and improve the functionality of the model checker. For instance, we have extended it to dpCTL, a tool for checking differential privacy CTL properties on Markov chains and MDPs; Petl, a tool for model checking probabilistic epistemic logic for probabilistic multiagent systems; and QMC, a model checker for quantum Markov chains.
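As a small illustration of the abstraction-based robustness analyses mentioned under the first direction, the following sketch certifies local robustness of a tiny ReLU classifier by propagating interval bounds (a simple abstract domain). The network weights, input point, and perturbation radius are made up for the example; this is not PRODeep's implementation.

```python
# Interval-based robustness certification for a toy ReLU classifier.
# If the lower bound of the predicted class's score exceeds the upper bound
# of every other class over the whole input box, robustness is proved.

def affine(x, W, b):
    return [sum(w * v for w, v in zip(row, x)) + bi for row, bi in zip(W, b)]

def affine_box(lo, hi, W, b):
    # Soundly propagate an interval box through y = W x + b.
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        l = h = bi
        for w, xl, xh in zip(row, lo, hi):
            l += w * (xl if w >= 0 else xh)
            h += w * (xh if w >= 0 else xl)
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def certify_robust(x, eps, layers):
    # Concrete forward pass to find the predicted class.
    y = list(x)
    for i, (W, b) in enumerate(layers):
        y = affine(y, W, b)
        if i < len(layers) - 1:
            y = [max(0.0, v) for v in y]  # ReLU
    pred = max(range(len(y)), key=y.__getitem__)
    # Abstract forward pass over the eps-box around x.
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_box(lo, hi, W, b)
        if i < len(layers) - 1:
            lo = [max(0.0, v) for v in lo]
            hi = [max(0.0, v) for v in hi]
    return all(lo[pred] > hi[j] for j in range(len(lo)) if j != pred)

# Toy 2-2-2 network (hypothetical weights).
layers = [([[1.0, -1.0], [0.5, 1.0]], [0.0, 0.0]),
          ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])]
```

Like any abstract-interpretation analysis, this is sound but incomplete: a `True` answer proves robustness on the box, while `False` only means the interval bounds were too coarse to certify it.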
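At the core of many probabilistic model-checking queries, e.g. the PCTL property P=?[F goal], lies a reachability probability. The following is a minimal value-iteration sketch on a hand-written Markov chain, purely for intuition and not ePMC's implementation:

```python
# Value iteration for reachability probabilities in a finite Markov chain:
# x_s = 1 for target states, otherwise x_s = sum_t P(s,t) * x_t, iterated
# towards a fixed point. This answers PCTL queries of the form P=?[F target].

def reach_prob(P, targets, tol=1e-12, max_iter=100_000):
    n = len(P)
    x = [1.0 if s in targets else 0.0 for s in range(n)]
    for _ in range(max_iter):
        y = [1.0 if s in targets
             else sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
        if max(abs(a - b) for a, b in zip(x, y)) < tol:
            return y
        x = y
    return x

# Symmetric random walk on {0, 1, 2, 3}; 0 and 3 are absorbing, 3 is the goal.
P = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 1.0]]
probs = reach_prob(P, targets={3})
```

For MDPs, the update additionally maximises (or minimises) over the actions available in each state; that extension, together with graph-based precomputation of probability-0/1 states, is what production model checkers implement.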
Publications and Tools
- PRODeep: a platform for robustness verification of deep neural networks.
- ePMC: the successor of IscasMC, a tool for verifying properties over probabilistic systems.
- ROLL: a library of learning algorithms for ω-regular languages, including algorithms for FDFAs and for Büchi automata.
- SVMRanker: a tool for synthesizing multiphase/nested ranking functions for loop programs based on SVM.
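For intuition about ranking functions: a function certifies termination of a loop if it is bounded below and strictly decreases on every iteration. The sketch below merely checks a candidate on sampled states of a toy loop (all names and the loop itself are hypothetical); it is not SVMRanker's SVM-based synthesis procedure.

```python
# Sample-based check that a candidate ranking function f is bounded below and
# strictly decreasing (by at least `delta`) across one loop iteration.
# Toy loop (hypothetical): while x > 0: x = x - 1, with candidate f(x) = x.

def checks_as_ranking(f, guard, step, samples, delta=1.0):
    return all(f(s) >= 0 and f(s) - f(step(s)) >= delta
               for s in samples if guard(s))

guard = lambda x: x > 0      # loop condition
step = lambda x: x - 1       # loop body
samples = range(-5, 50)      # sampled program states
```

A passing check on samples is only evidence, not a proof; tools in this space discharge the boundedness and decrease conditions over all reachable states, typically with an SMT solver.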
The source code of most of the tools can be found here.
The research has been supported by several ongoing and completed projects.