Discussion meeting
This is a summary of the group meeting on 30 December 2020.
Liang: Theoretically Principled Trade-off between Robustness and Accuracy, Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, Michael I. Jordan (ICML2019)
A state-of-the-art method to improve DNN adversarial robustness with less accuracy loss.
- The paper focuses on the trade-off between robustness and accuracy, and shows an upper bound on the gap between the robust error and the optimal natural error; this bound is proved to be tight.
- The bounds motivate the authors to minimize a new form of regularized surrogate loss, TRADES, for adversarial training.
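As a reference, the TRADES objective can be sketched as follows (our recollection of the paper's formulation, not a verbatim quote: phi is a surrogate loss, B(X, epsilon) is the perturbation ball, and the regularization weight beta plays the role of 1/lambda in the paper's notation):

$$
\min_f \; \mathbb{E}\left\{ \phi\big(f(X)\,Y\big) \;+\; \beta \max_{X' \in \mathbb{B}(X,\varepsilon)} \phi\big(f(X)\,f(X')\big) \right\}
$$

The first term controls the natural error, while the second term controls the boundary error inside an epsilon-ball around each example.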
- The new surrogate loss function has been implemented as a PyTorch package and achieves strong experimental results compared with other well-known methods.
In particular, this paper offers some interesting insights into adversarial robustness.
First, it decomposes the robust error into the sum of the natural error and the boundary error. Second, when we look at the TRADES loss function more carefully, we find that the first term drives down the natural error by minimizing the "difference" between f(X) and Y, while the second (regularization) term encourages the output to be smooth: it pushes the decision boundary of the classifier away from the sample instances by minimizing the "difference" between the prediction on the natural example, f(X), and that on the adversarial example, f(X').
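The two terms above can be sketched in code. The following is a minimal NumPy illustration (not the official PyTorch package): it assumes `logits_nat = f(X)` and `logits_adv = f(X')` are given, takes cross-entropy as the first surrogate and the KL divergence as the second (as in the paper's practical algorithm), and omits the inner maximization that actually finds X'.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def trades_loss(logits_nat, logits_adv, y, beta=6.0):
    """Sketch of the TRADES objective on a batch of precomputed logits.

    logits_nat: f(X), logits of the natural examples, shape (n, k)
    logits_adv: f(X'), logits of the adversarial examples, shape (n, k)
    y: integer class labels, shape (n,)
    beta: regularization weight (1/lambda in the paper's notation)
    """
    p_nat = softmax(logits_nat)
    p_adv = softmax(logits_adv)
    n = logits_nat.shape[0]
    # First term: cross-entropy on natural examples -> natural error.
    ce = -np.log(p_nat[np.arange(n), y] + 1e-12).mean()
    # Second term: KL(f(X) || f(X')) -> pushes the decision boundary
    # away from the samples (boundary error).
    kl = (p_nat * (np.log(p_nat + 1e-12)
                   - np.log(p_adv + 1e-12))).sum(axis=1).mean()
    return ce + beta * kl
```

When the adversarial logits equal the natural ones, the KL term vanishes and the loss reduces to plain cross-entropy; any disagreement between f(X) and f(X') adds a non-negative penalty scaled by beta.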
Jianlin: Actris: session-type based reasoning in separation logic, (POPL ’20)
Summary to be added.