Interesting papers: formally verifying DNNs

This post shares a few seed papers on formally verifying DNNs. I haven’t read most of them, but I think more people should be aware that formally verifying somewhat large systems is already possible, and that further progress on interpretability may push it further still.

IMO, to get value out of this post, you’ll need to turn up your mental clickthrough-skim-and-close rate. I’d suggest opening the papers sequentially, setting a timer for two minutes at most, skimming, favoriting into a semanticscholar feed if desired, then closing the paper; a few hours later, reopen the ones that were worth your time.

I recently discovered semanticscholar’s research-feeds functionality, which gives you a feed of new papers related to your library. I already loved semanticscholar for how easy it makes finding the papers that cite a given paper. That said, I also really like Zotero, and I wish I could export papers from semanticscholar to Zotero automatically.

Abstracts are shortened with GPT-3 (latest instruct variant), guided by backspace. These are basically all from the same research group; maybe folks should get in contact with them? Are any of them here?


(pdf) - (semanticscholar: 17+ papers cite this)
Reluplex: a calculus for reasoning about deep neural networks
DNNs used for complex problems; the obstacle is the difficulty of formal behavior guarantees. We present a technique for verifying properties of DNNs, or giving counterexamples, based on the simplex method extended to handle the non-convex ReLU. Evaluated on a collision avoidance system. Results show proofs for DNNs an order of magnitude larger than previously verifiable.
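To make “simplex extended for non-convex ReLU” concrete, here’s a brute-force toy I wrote (my illustration, not Reluplex’s actual algorithm): fix each ReLU to active or inactive, and within each such phase pattern the network is affine, so checking the property becomes a small linear program. The network weights and property below are made up; Reluplex’s contribution is avoiding this exponential enumeration by splitting on ReLU phases lazily, only when the simplex search forces it.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-2-1 network: y = w2 @ relu(W1 @ x + b1) + b2.
W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.array([0.0, -0.5])
w2 = np.array([1.0, 1.0])
b2 = 0.0

# Property to verify: y <= 2.0 for all x in [0, 1]^2.
# We search for a violation, i.e. an x with y >= 2.0 + EPS.
THRESHOLD, EPS = 2.0, 1e-6

def find_counterexample():
    for phases in itertools.product([0, 1], repeat=2):  # 1 = ReLU active
        # With the phases fixed, the network is affine in x:
        # y = sum_i phases[i] * w2[i] * (W1[i] @ x + b1[i]) + b2
        eff_w = sum(p * w2[i] * W1[i] for i, p in enumerate(phases))
        eff_b = sum(p * w2[i] * b1[i] for i, p in enumerate(phases)) + b2
        A_ub, b_ub = [], []
        for i, p in enumerate(phases):  # phase-consistency constraints
            if p:  # active: W1[i] @ x + b1[i] >= 0
                A_ub.append(-W1[i]); b_ub.append(b1[i])
            else:  # inactive: W1[i] @ x + b1[i] <= 0
                A_ub.append(W1[i]); b_ub.append(-b1[i])
        # Violation constraint: eff_w @ x + eff_b >= THRESHOLD + EPS.
        A_ub.append(-eff_w); b_ub.append(eff_b - THRESHOLD - EPS)
        res = linprog(np.zeros(2), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, 1), (0, 1)], method="highs")
        if res.status == 0:  # feasible: a concrete counterexample exists
            return res.x
    return None  # no phase pattern admits a violation: property proved

cx = find_counterexample()
print("property holds" if cx is None else f"counterexample: x = {cx}")
```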

… is cited by …

(pdf) - (semanticscholar: 9+ papers cite this)
Pruning and Slicing Neural Networks using Formal Verification
DNNs increasingly important. Engineers specify topology and use a training algorithm to select weights, but topology selection is often an art, resulting in large and slow networks. We present a methodology for discovering redundancies by using sound verification techniques to guarantee equivalence of the simplified network to the original, either completely or up to a prescribed tolerance. Shows how to combine with slicing, creating small DNNs that are jointly equivalent to the original. The resulting DNNs are significantly smaller and more suitable for deployment and formal verification.
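The “equivalent up to a prescribed tolerance” check boils down to bounding max |f(x) - f_pruned(x)| over the input domain. Here’s a minimal sketch of that query under my own assumptions (made-up weights, crude interval bound propagation rather than the paper’s method); a sound verifier answers the same query much more tightly, in particular by encoding both networks over a shared input instead of bounding each separately, as below.

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Interval bounds on W @ x + b given x in [lo, hi]."""
    Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def ibp_net(lo, hi, layers):
    """Propagate an input box through affine + ReLU layers."""
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)
W1p = np.where(np.abs(W1) < 0.1, 0.0, W1)  # prune small weights

lo, hi = np.zeros(2), np.ones(2)
l_orig, u_orig = ibp_net(lo, hi, [(W1, b1), (W2, b2)])
l_prun, u_prun = ibp_net(lo, hi, [(W1p, b1), (W2, b2)])
# Sound (if loose) bound: f - f_pruned lies in [l_orig - u_prun, u_orig - l_prun].
gap = max(abs(l_orig - u_prun).max(), abs(u_orig - l_prun).max())
print(f"|f - f_pruned| <= {gap:.3f} over the input box (loose IBP bound)")
```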

… is cited by …

(pdf) - (semanticscholar)
On Optimizing Back-Substitution Methods for Neural Network Verification
Need for formal behavior guarantees of DNNs. Many proposed schemes, limited scalability or accuracy. Key component is bounds on values neurons can take for an input domain. Back-substitution is successful, but this paper presents a way to make it produce tighter bounds by formulating and minimizing imprecision errors incurred during back-substitution. Technique is general and can be integrated into existing symbolic-bound propagation techniques with minor modifications.
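For intuition about what back-substitution does, and where the imprecision the paper minimizes comes from, here’s a toy single-step version in the generic symbolic-bound-propagation style (my illustration, not this paper’s specific method): replace each ReLU by a linear relaxation, substitute back through the previous affine layer, and concretize the resulting symbolic bound over the input box. The slope choice in `line_lower` below is the kind of degree of freedom such methods get to optimize.

```python
import numpy as np

def triangle_upper(l, u):
    """Coefficients (a, c) with relu(z) <= a * z + c for all z in [l, u]."""
    if u <= 0: return 0.0, 0.0
    if l >= 0: return 1.0, 0.0
    a = u / (u - l)                 # "crossing" neuron: triangle relaxation
    return a, -a * l

def line_lower(l, u):
    """Slope a with relu(z) >= a * z on [l, u]."""
    if u <= 0: return 0.0
    if l >= 0: return 1.0
    return 1.0 if u >= -l else 0.0  # common heuristic; a tunable choice

def upper_bound(w, W1, b1, l1, u1, lo, hi):
    """Concrete upper bound on w @ relu(W1 @ x + b1) over x in [lo, hi],
    given pre-activation bounds l1 <= W1 @ x + b1 <= u1."""
    A, c = np.zeros(W1.shape[1]), 0.0
    for i, wi in enumerate(w):
        if wi >= 0:  # need an upper relaxation of relu
            a, off = triangle_upper(l1[i], u1[i])
        else:        # negative weight flips it: need a lower relaxation
            a, off = line_lower(l1[i], u1[i]), 0.0
        A += wi * a * W1[i]         # substitute z_i = W1[i] @ x + b1[i]
        c += wi * (a * b1[i] + off)
    # Concretize the symbolic bound A @ x + c over the input box.
    return np.clip(A, 0, None) @ hi + np.clip(A, None, 0) @ lo + c

W1 = np.array([[1.0, -1.0], [2.0, 1.0]])
b1 = np.array([0.0, -1.0])
w = np.array([1.0, -0.5])
lo, hi = np.zeros(2), np.ones(2)
# Pre-activation bounds by plain interval arithmetic.
l1 = np.clip(W1, None, 0) @ hi + np.clip(W1, 0, None) @ lo + b1
u1 = np.clip(W1, 0, None) @ hi + np.clip(W1, None, 0) @ lo + b1
print(upper_bound(w, W1, b1, l1, u1, lo, hi))
```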

(pdf) - (fancy pdf) - (semanticscholar)
Minimal Multi-Layer Modifications of Deep Neural Networks
DNNs popular, but may err in safety-critical settings. Work exists on detecting errors, but removing them has received less attention. New tool, 3M-DNN, for repairing a DNN that is known to err. The procedure computes a weight modification that corrects the behavior, and attempts to minimize the change via a black-box verification engine. Our method is the first to allow repairing multiple layers simultaneously, by splitting the network into sub-networks and applying the repair technique to each component. We evaluated the 3M-DNN tool on benchmarks, obtaining promising results.
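To see why minimal repair is tractable at all: if you fix an input and modify only one layer, the correctness constraint is linear in the weight change, so minimizing, say, the L-infinity norm of the change is a small LP. A hypothetical single-point, single-layer sketch follows (all names and weights are mine); 3M-DNN itself repairs multiple layers at once and uses a verification engine to check the repaired network.

```python
import numpy as np
from scipy.optimize import linprog

W1 = np.array([[1.0, 0.5], [-1.0, 1.0]]); b1 = np.array([0.0, 0.0])
w2 = np.array([0.2, -0.3]);               b2 = -0.1

x0 = np.array([1.0, 0.5])                 # an input the network gets wrong
h0 = np.maximum(W1 @ x0 + b1, 0.0)        # hidden activations are now fixed
TARGET = 1.0                              # we want y(x0) >= TARGET

# LP variables: [delta_1, ..., delta_m, t]. Minimize t subject to
# |delta_j| <= t and (w2 + delta) @ h0 + b2 >= TARGET.
m = len(w2)
c = np.zeros(m + 1); c[-1] = 1.0
A_ub, b_ub = [], []
for j in range(m):                        # |delta_j| <= t, as two rows
    row = np.zeros(m + 1); row[j] = 1.0; row[-1] = -1.0
    A_ub.append(row); b_ub.append(0.0)
    row = np.zeros(m + 1); row[j] = -1.0; row[-1] = -1.0
    A_ub.append(row); b_ub.append(0.0)
row = np.zeros(m + 1); row[:m] = -h0      # repair constraint, rearranged
A_ub.append(row); b_ub.append(w2 @ h0 + b2 - TARGET)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * m + [(0, None)], method="highs")
delta = res.x[:m]
print("repaired w2:", w2 + delta, "new y(x0):", (w2 + delta) @ h0 + b2)
```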

(pdf) - (semanticscholar)
Neural Network Verification with Proof Production
DNNs employed in safety critical systems; need to guarantee correctness. Multiple techniques/​tools for verifying DNNs, but when verifiers report no error exists, no way to ensure tool itself isn’t flawed. Presents mechanism for enhancing Simplex-based verifiers with proof production capabilities: generation of easy-to-check witness of unsatisfiability attesting to absence of errors. Based on efficient adaptation of Farkas’ lemma, combined with mechanisms for handling piecewise-linear functions and numerical precision errors. Implemented on Marabou DNN verifier as proof of concept.
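The LP core of such a certificate is Farkas’ lemma: a linear system A x <= b is infeasible iff some nonnegative combination of its constraints yields the contradiction 0 <= (negative number). Checking a witness is trivial, which is the whole point. A minimal checker for the pure-LP case (the paper’s mechanism also handles ReLU case splits and floating-point error, which this sketch does not):

```python
import numpy as np

def check_farkas_certificate(A, b, y, tol=1e-9):
    """Return True iff y >= 0, y @ A = 0, and y @ b < 0, i.e. y proves
    that {x : A @ x <= b} is empty."""
    y = np.asarray(y, dtype=float)
    return bool((y >= -tol).all()
                and np.allclose(y @ A, 0.0, atol=tol)
                and y @ b < -tol)

# x <= 1 together with -x <= -2 (i.e. x >= 2) is clearly infeasible;
# summing the two constraints gives the contradiction 0 <= -1.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])
print(check_farkas_certificate(A, b, y=[1.0, 1.0]))  # True
```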


p.s. readers: there are a whole bunch more interesting papers in these threads. I got bored of trying to pick favorites at 2am; please check out this group’s more recent research if these papers interest you. There’s way more relevant research happening than LessWrong appears to know about; let’s get these topics indexed on here somehow! If this post sucks and I should post a better version, tell me so.

p.s. mods: I currently expect to make between one and six posts like this every two weeks; as such, please only promote to the front page if a topic seems solid. Other folks: skim my profile for more links.

p.p.s. I’m probably less able to comprehend the papers I shop than many; reading papers is hard, let’s go shopping (for papers)! Use your judgement about which papers are insightful and skim them fast.
