Our group has been involved in several projects.

Ongoing projects

  • NSFC grant 61836005

Completed projects

  • Detecting Anomalies in Reactive Systems — Sino-German Collaboration
    • What is it that makes a program malicious? In this project, we investigate the hypothesis that a “malicious” program is one that does not work as advertised. Our idea is to exploit large collections of programs, especially apps in app stores, and to learn from them the associations between advertised and implemented behavior. As advertised behavior, we use the natural language descriptions presented in user interface elements; as implemented behavior, we check the functionality triggered by these UI elements. The result is a model of actions and reactions that characterizes “normal” behavior. Given a new app, we can then automatically check against its model whether the observed actions and reactions are “normal”. We can thus highlight problems related to security (the implementation does not work as advertised) as well as usability (the description does not match the implementation). During execution, a sandbox detects “abnormal” and explicitly disallowed sequences, and blocks the associated resource accesses and UI elements: “The ‘Download’ button is greyed out because it sends your address book to a server in Bezerkistan”. The project brings together expertise in program analysis, test generation, natural language processing, model inference, and model checking, and makes significant contributions in all these fields toward its overall goal of detecting and preventing abnormal behavior in reactive systems.

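The action–reaction check described above can be sketched in miniature. Everything below (the UI labels, the resource names, and the `check_app` helper) is hypothetical, invented for illustration; it is not the project's actual tooling:

```python
# A toy model of "normal" behavior: which sensitive resources a UI
# element with a given advertised label typically accesses, as it might
# be learned from a large corpus of apps.
NORMAL_BEHAVIOR = {
    "Download": {"network"},
    "Share": {"network", "contacts"},
    "Take photo": {"camera", "storage"},
}

def check_app(observed: dict[str, set[str]]) -> list[str]:
    """Compare an app's observed UI-triggered accesses against the model.

    Returns warnings for accesses beyond what the advertised label
    normally implies (a security mismatch) and for labels the model has
    never seen (a potential usability mismatch).
    """
    warnings = []
    for label, accesses in observed.items():
        expected = NORMAL_BEHAVIOR.get(label)
        if expected is None:
            warnings.append(f"unknown UI label: {label!r}")
            continue
        for extra in sorted(accesses - expected):
            warnings.append(
                f"{label!r} unexpectedly accesses {extra} -- blocking it"
            )
    return warnings

# Example: a 'Download' button that also reads the address book.
print(check_app({"Download": {"network", "contacts"}}))
# -> ["'Download' unexpectedly accesses contacts -- blocking it"]
```

A real mining step would of course learn `NORMAL_BEHAVIOR` from descriptions and dynamic analysis rather than hard-code it; the sketch only shows the final comparison.
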
  • High Dependability of Deep Learning — Sino-German Collaboration
    • As the core technique of modern AI systems, deep learning has achieved significant breakthroughs in the past few years and has been deployed in a wide range of applications. In safety-critical domains, however, faulty behavior risks endangering human lives or causing severe financial losses. To address these concerns, we investigate verification and testing techniques for Deep Neural Networks (including Convolutional Neural Networks, Recurrent Neural Networks, etc.). Building on formal methods, model checking, software testing, and abstract interpretation, we strive to establish a formal theory of deep learning models and, ultimately, to improve the dependability of deep learning systems.
      Main researchers:

      • Lijun Zhang, ISCAS, Beijing, China
      • Zhong Ming, Shenzhen University, Shenzhen, China
      • Holger Hermanns, Saarland University, Saarbrücken, Germany
      • Meng Sun, Peking University, Beijing, China
      • Fei He, Tsinghua University, Beijing, China
      • Jun Sun, Singapore University of Technology and Design, Singapore, Singapore
      • Xiaowei Huang, University of Liverpool, Liverpool, United Kingdom
      • Andrea Turrini, Institute of Intelligent Software, Guangzhou, China
      • Taolue Chen, Birkbeck, University of London, London, United Kingdom
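
To illustrate one of the ingredients named above, here is a minimal, self-contained sketch of abstract interpretation on a toy ReLU network via interval bound propagation. The network, its weights, and the input bounds are invented for illustration; real verifiers use far richer abstract domains:

```python
def interval_affine(lo, hi, W, b):
    """Sound interval bounds for y = W x + b given x in [lo, hi]."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            # A positive weight maps the lower input bound to the lower
            # output bound; a negative weight swaps the two.
            if w >= 0:
                l += w * xl
                h += w * xh
            else:
                l += w * xh
                h += w * xl
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it can be applied bound-wise."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# One hidden layer with two neurons; input x ranges over [0,1] x [-1,1].
W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, -1.0]
lo, hi = interval_affine([0.0, -1.0], [1.0, 1.0], W, b)
lo, hi = interval_relu(lo, hi)
print(lo, hi)  # -> [0.0, 0.0] [3.0, 0.0]
```

Every concrete input inside the box is guaranteed to produce outputs inside the computed intervals, which is the soundness property such verification rests on.
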
  • The CAP Sino-German Project
    • The project investigates quantitative analysis techniques for probabilistic and hybrid systems. These systems are nowadays used to study phenomena in a wide spectrum of application domains, ranging from modern power grids through embedded hardware and software to transportation infrastructure.

  • The Inria-CAS Joint Research Project
    • This project aims to combine model checking, theorem proving, and process algebras so as to push forward the frontier of verification methods for probabilistic and quantum systems. For probabilistic systems, we will use probabilistic model checking as the main technique to verify several important classes of systems, such as fault-tolerant systems, randomized network protocols, and dynamic, real-time and probabilistic (DRP) software architectures, which have recently received increasing attention. This technique is complemented by probabilistic bisimulations developed from process algebras, as bisimulation minimization can help reduce the state space of a system before model checking is applied. For systems with infinite state spaces, we resort to interactive theorem proving based on deductive reasoning instead of fully automated theorem proving. Unlike probabilistic systems, the theoretical foundation for verifying quantum systems is not yet mature. We will investigate how to generalize existing methods for probabilistic systems to the quantum setting.