Discussion meeting
This is a summary of the group meeting on 16 December 2020.
Chen: Francesco Belardinelli, Alessio Lomuscio, Emily Yu: Model Checking Temporal Epistemic Logic under Bounded Recall (AAAI-2020)
The paper studies the model checking problem for CTLK in multi-agent systems. CTLK combines CTL with epistemic (knowledge) logic. Unlike previous work, the semantics considered here is bounded recall: instead of asking whether a single state satisfies a formula, one asks whether a history (a finite sequence of states) satisfies it. This allows the logic to express more properties, serves as an approximation of the perfect-recall semantics, and has lower complexity than perfect recall. The paper also gives a model checking algorithm and experimental results.
Discussion and thoughts:
Is the expressive power of CTLK under bounded recall actually different from the no-recall case? Bounded recall lets CTL formulas describe some path-formula-like properties, but under no recall these can seemingly be imitated by prefixing nestings like "EX EX ...". The paper does not explore this question in depth; the authors' intent appears to be to keep the good complexity of CTL model checking while moving the expressiveness a little toward CTL*. Whether the expressiveness genuinely increases is not rigorously argued, and the example given in the paper does not settle the matter.
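A hedged illustration of the "EX EX ..." point (our own example, not one from the paper): a forward-looking property such as "p can hold two steps from now" needs no recall at all,
\[
\mathrm{EX}\,\mathrm{EX}\,p,
\]
whereas a bounded-recall property speaks about the history itself, e.g. "p held at every state of the current length-n history":
\[
h = s_0 s_1 \cdots s_{n-1} \models \varphi \quad\text{iff}\quad s_i \models p \ \text{for all}\ 0 \le i < n.
\]
The open question in the discussion is whether such backward-looking properties can always be compiled away into ordinary state-based CTLK formulas.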
Pengfei: Pranav Ashok, Vahid Hashemi, Jan Křetínský, Stefanie Mohr: DeepAbstract: Neural Network Abstraction for Accelerating Verification (ATVA-2020)
They introduce an abstraction framework applicable to fully-connected feed-forward neural networks, based on clustering neurons that behave similarly on some inputs. For the particular case of ReLU, they additionally provide error bounds incurred by the abstraction. They show how the abstraction reduces the size of the network while preserving its accuracy, and how verification results on the abstract network can be transferred back to the original network.
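A minimal sketch of the clustering step as we understand it (our reconstruction of the general idea, not the paper's exact construction): neurons of one hidden layer are described by their activation vectors over a set of sampled inputs, clustered with k-means, and each cluster is merged into one representative neuron whose outgoing weights are the summed outgoing weights of its members.

```python
# Sketch of activation-based neuron clustering for one fully-connected layer.
# Our reconstruction of the general idea; details may differ from the paper.
import numpy as np
from sklearn.cluster import KMeans

def abstract_layer(acts, W_out, k):
    """acts: (n_samples, n_neurons) activations of layer l on sampled inputs.
    W_out: (n_neurons, n_next) weights from layer l to layer l+1.
    k: the number K_L of abstract neurons to keep.
    Returns representative-neuron indices and the merged outgoing weights."""
    # Each neuron is a point in sample space: its activation profile.
    km = KMeans(n_clusters=k, n_init=10).fit(acts.T)
    reps, W_new = [], np.zeros((k, W_out.shape[1]))
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        # Representative: the member closest to the cluster centroid.
        dists = np.linalg.norm(acts.T[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(dists)])
        # Summing outgoing weights (approximately) preserves the cluster's
        # total contribution to the next layer, since members behave alike.
        W_new[c] = W_out[members].sum(axis=0)
    return np.array(reps), W_new
```

The representative keeps its incoming weights; the error analysis discussed below then quantifies how much replacing each member by the representative can change the output.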
Q: How to understand the error analysis? A: Basically, the error comes from two sources: the abstraction and the perturbation. The first theorem gives the clustering-induced error, and the second gives an upper bound on the total error.
Q: Does the clustering-induced error depend on sampling? A: Only the abstraction itself depends on sampling. The clustering-induced error is stated for a specific input, so \epsilon_i^l no longer depends on the sampling.
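A hedged sketch of how such per-layer errors typically compose (a generic Lipschitz-style bound under our assumptions, not necessarily the paper's exact statement): write \epsilon^l for the abstraction error injected at layer l on the given input (aggregating the per-neuron \epsilon_i^l), and W^k for the weight matrix of layer k. Since ReLU is 1-Lipschitz, each injected error is amplified by at most the operator norms of the later layers, so
\[
\|\tilde f(x) - f(x)\| \;\le\; \sum_{l=1}^{L} \Big( \prod_{k=l+1}^{L} \|W^k\| \Big)\, \epsilon^l ,
\]
where f is the original network and \tilde f the abstracted one.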
Q: Can you say more about how K_L is chosen? A: Basically, the criterion is that the accuracy must not decrease by more than a given \alpha. When the abstraction is too coarse, a binary search is used to try a larger K_L.
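A hedged sketch of that binary search (our reconstruction; the helper names abstract_network and accuracy are hypothetical, and the search assumes accuracy grows monotonically with K_L):

```python
# Binary search for the smallest K_L whose abstraction loses at most alpha
# accuracy on validation data. Helper names are hypothetical placeholders.
def search_k(network, val_data, alpha, n_neurons):
    base = accuracy(network, val_data)            # accuracy of the original net
    lo, hi, best = 1, n_neurons, n_neurons
    while lo <= hi:
        k = (lo + hi) // 2
        abstracted = abstract_network(network, k)  # cluster down to k neurons
        if base - accuracy(abstracted, val_data) <= alpha:
            best, hi = k, k - 1                    # coarse enough: try smaller
        else:
            lo = k + 1                             # too coarse: larger K_L
    return best
```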