Researcher interests: Creating complex software is difficult, and so people have started using machine learning (ML) to generate software instead of writing it manually. The problem: ML-produced software is opaque to humans, and it is difficult to certify that it always behaves correctly. In safety-critical systems, e.g. autonomous vehicles, this raises serious concerns. As part of my research, I seek to develop techniques and tools that allow us to formally reason about deep neural networks (a specific kind of ML-produced software), and in particular to verify that they are correct.

Research on neural network verification is still at an early stage, and there are many challenges to overcome. First, there is the matter of scalability: the verification problem is NP-complete, i.e. exponentially difficult in the size of the network, so it is challenging to apply verification to large networks. Our research group is addressing this difficulty by applying state-of-the-art techniques from the fields of verification, linear programming, and SMT solving.

Another interesting topic is that of applications: verification of neural networks is a useful hammer, and we are just beginning to discover all the potential nails. One project where we have successfully applied this technique is the ACAS Xu system, an airborne collision-avoidance system for drones currently being developed by the US Federal Aviation Administration (FAA). The ACAS Xu system includes 45 deep neural networks, and it is crucial that these networks behave correctly, so as to avoid potential mid-air collisions. Using verification, we were able to formally prove that this is indeed the case (for a list of desired properties), and in one case we even detected a bug. Another interesting application is robustness to adversarial examples, where we use verification to prove that small perturbations to a network's inputs do not cause it to make mistakes.
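As a toy illustration of the kind of robustness query described above, the sketch below certifies that no input within an L∞ ball of radius eps around a given input changes a small ReLU network's classification. This is a minimal interval-bound-propagation sketch, not the group's actual approach (which builds on SMT solving and tools such as Marabou); the network, `certify_robust`, and `interval_affine` are illustrative names, and interval propagation is a sound but incomplete approximation, so a `False` answer here does not prove a counterexample exists.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an interval [lo, hi] through the affine map x -> W @ x + b.

    Positive weights send lower bounds to lower bounds; negative weights
    swap them, so we split W by sign to get sound output bounds.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def certify_robust(x, eps, layers, label):
    """Try to certify local robustness at x for an L-infinity ball of radius eps.

    layers: list of (W, b) pairs, with ReLU between layers (none after the last).
    Returns True only if every input in the ball provably keeps `label` on top.
    """
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU is monotone, so it maps bounds to bounds
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    # Robust if the worst-case score of the correct label still beats
    # the best-case score of every other label.
    others = [j for j in range(len(lo)) if j != label]
    return bool(lo[label] > max(hi[j] for j in others))
```

Complete tools replace the coarse interval arithmetic with exact case-splitting on each ReLU's active/inactive phases (the source of the NP-completeness mentioned above), which is where SMT and linear-programming techniques come in.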
DEPARTMENT: Computer Science
Contact Business Development: Anna Pellivert

Selected Publications

Assaf Marron, Aviran Sadon, Gera Weiss, Guy Katz
On-the-Fly Construction of Composite Events in Scenario-Based Modeling Using Constraint Solvers (2019) | arXiv preprint arXiv:1909.00408
Aleksandar Zeljić, Christopher Lazarus, Clark Barrett, David L. Dill, Derek A. Huang, Duligur Ibeling, Haoze Wu, Guy Katz, Kyle D. Julian, Mykel J. Kochenderfer, Parth Shah, Rachel Lim, Shantanu Thakoor
The Marabou Framework for Verification and Analysis of Deep Neural Networks (2019) | International Conference on Computer Aided Verification
Justin Gottschlich, Guy Katz, Yizhak Yisrael Elboher
An Abstraction-Based Framework for Neural Network Verification (2019) | arXiv preprint arXiv:1910.14574
Adi Malca, Alexander Feldsher, Clark Barrett, Guy Katz, Sumathi Gokulanathan
Simplifying Neural Networks with the Marabou Verification Engine (2019) | arXiv preprint arXiv:1910.12396
Assaf Marron, Aviran Sadon, Gera Weiss, Guy Katz
On-the-Fly Construction of Composite Events in Scenario-Based Modeling Using Constraint Solvers (2019) | Proceedings of the 7th International Conference on Model-Driven Engineering and Software Development
Clark Barrett, Guy Katz, Michael Schapira, Yafim Kazak
Verifying Deep-RL-Driven Systems (2019) | Proceedings of the 2019 Workshop on Network Meets AI & ML (NetAI '19)
Clark Barrett, Corina S. Păsăreanu, Divya Gopinath, Guy Katz
DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks (2018) | International Symposium on Automated Technology for Verification and Analysis
Efficient Distributed Execution of Multi-component Scenario-Based Models (2018) | Communications in Computer and Information Science
Clark Barrett, David L. Dill, Guy Katz, Nicholas Carlini
Ground-Truth Adversarial Examples (2018) | arXiv: Learning
Assaf Marron, David Harel, Guy Katz, Rami Marelly
Wise Computing: Toward Endowing System Development with Proactive Wisdom (2018) | IEEE Computer
Clark Barrett, Justin E. Gottschlich, Guy Katz, Kyle D. Julian, Lindsey Kuper, Mykel J. Kochenderfer
Toward Scalable Verification for Safety-Critical Deep Networks (2018) | arXiv preprint arXiv:1801.05950
Finally, a Peek Inside the ‘Black Box’ of Machine Learning Systems (2017) | Computation & Data, Technology & Society
